Integrated Intelligent Manufacturing for Steel Industries


Research and Innovation EUR 28453 EN

ISSN 1831-9424 (PDF) ISSN 1018-5593 (Printed)


EUROPEAN COMMISSION Directorate-General for Research and Innovation Directorate D — Industrial Technologies Unit D.4 — Coal and Steel

E-mail: [email protected] [email protected]

Contact: RFCS Publications

European Commission B-1049 Brussels

European Commission

Research Fund for Coal and Steel

Integrated Intelligent Manufacturing for Steel Industries (I2MSteel)

Dr. Gaël Mathis
ArcelorMittal Maizières Research SA

Voie Romaine - Maizières-les-Metz (FRANCE)

Reiner Pevestorf, Norbert Goldenberg, Dr. Martin Rößiger, Dr. Sonja Zillner, Martin Schneider, Artur Papiez

Siemens AG
Schuhstraße 60 - DE-91052 Erlangen (Germany)

Francesca Marchiori, Costanzo Pietrosanti, Luca Piedimonte
Centro Sviluppo Materiali S.p.A.

Via di Castel Romano, 100 - IT-00128 Roma (Italy)

Dr. Alexander Ebel, Dr. Marcus Neuer
VDEh-Betriebsforschungsinstitut GmbH

Sohnstrasse 65 - DE-40237 Düsseldorf (Germany)

Stéphane Mouton, Nikolaos Matskanis
CETIC

Rue des Frères Wright 29/3 - BE-6041 Charleroi (Belgium)

Grant Agreement RFSR-CT-2012-00038 1 July 2012 to 31 December 2015

Final report

Directorate-General for Research and Innovation

2017 EUR 28453 EN

LEGAL NOTICE

Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which might be made of the following information.

The views expressed in this publication are the sole responsibility of the authors and do not necessarily reflect the views of the European Commission.

More information on the European Union is available on the Internet (http://europa.eu). Cataloguing data can be found at the end of this publication. Luxembourg: Publications Office of the European Union, 2017

Print ISBN 978-92-79-65608-8 ISSN 1018-5593 doi:10.2777/598762 KI-NA-28-453-EN-C
PDF ISBN 978-92-79-65607-1 ISSN 1831-9424 doi:10.2777/25469 KI-NA-28-453-EN-N

© European Union, 2017 Reproduction is authorised provided the source is acknowledged.

Europe Direct is a service to help you find answers to your questions about the European Union

Freephone number (*): 00 800 6 7 8 9 10 11

(*) Certain mobile telephone operators do not allow access to 00 800 numbers or these calls may be billed.


Table of contents

1 Final summary

2 Scientific and technical description of the results

2.1 Objectives of the project

2.2 Description of activities and discussion

2.2.1 Work package 1: Requirement analysis

2.2.2 Work package 2: Development of overall, system and application architecture and development of semantic model

2.2.3 Work package 3: Agent Holon Development

2.2.4 Work package 4: Services Development

2.2.5 Work package 5: System integration, implementation of demonstration example, simulation, adaptation

2.2.6 Work package 6: Test Phase, Evaluation of test results and investigation of transferability

2.3 Conclusions, indicating the achievements made

2.4 Exploitation and impact of the research results

3 List of figures

4 List of tables

5 List of acronyms and abbreviations

6 List of References

7 Appendices


1 Final summary

Work package 1: Requirement analysis

Task 1.1: Identification of specific and global goals

In this work package we identified the needs of the steel producers and of the automation supplier for the platform. We analysed different possible application scenarios in order to identify a broad spectrum of requirements for the system to be developed. To perform this task, the partners first identified general goals for the system, covering the major roadblocks in current industrial systems: integration, extendibility and transferability. We then analysed several potential use cases: alternative thickness, order recovery, slab insertion into the rolling programme, a feedback loop from the pickling line to the cooling section of the hot strip mill (HSM), energy-market-oriented production planning, energy management, and rescheduling. From these, and with the agreement of the plant experts, we selected two use cases for demonstration: Alternative Thickness and Reallocation.

Task 1.2: Definition of industrial needs and requirements

The aim of this task is to investigate the needs and requirements regarding the implementation and integration of the I2MSteel solution in the industrial environment. A distinction is made between the point of view of information systems and the point of view of production.

Information systems: One of the major problems in the current situation at steel plants is the missing interoperability of the different systems. No standards exist that would allow an easy integration of additional information systems into the existing environment. The next generation of automation systems will leave the pyramid structure behind and make the required services available globally.

Production: With its approach, I2MSteel establishes, especially in brownfield projects, the possibility to overcome hierarchical borders. Improving the level of service (lead times, quality) while reducing production costs are the main needs of the plants. Together with plant managers, we defined where the I2MSteel system could contribute to these global needs.

Task 1.3: Specification of demonstration example details

This task is devoted to the analysis of the details of the two selected use cases.

Alternative Thickness: This use case arises before the finishing mill as a consequence of a disturbance within a process or, more generally, due to a deviation of some product property (e.g. a slab that is too cold), with the result that the slab cannot be rolled to the desired thickness. After the occurrence of this problem the operator has only a short time, on the order of tens of seconds, to find a new suitable thickness for the bar to be rolled.

Reallocation: For reallocation, we consider the case where the produced metal is not compliant with its order and is therefore de-allocated: at the end of the current process the product does not continue its transformation but is stocked in a yard, waiting for allocation to a new order. The principle is then to collect a list of orders and a list of non-allocated material and to evaluate their compliance using rules defined by experts.
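The rule-based matching described above can be sketched as follows. The order fields (grade, width range) and the matching rule are illustrative assumptions, not the experts' actual rule set:

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    grade: str
    min_width: float
    max_width: float

@dataclass
class Coil:
    coil_id: str
    grade: str
    width: float

def compliant_orders(coil, orders):
    """Return the open orders that the de-allocated coil could satisfy
    (hypothetical rule: same steel grade, width within the order's range)."""
    return [o for o in orders
            if o.grade == coil.grade
            and o.min_width <= coil.width <= o.max_width]

orders = [Order("O1", "S235", 1000, 1200),
          Order("O2", "S355", 1000, 1200),
          Order("O3", "S235", 1300, 1500)]
coil = Coil("C7", "S235", 1100)
matches = compliant_orders(coil, orders)
# only O1 matches: same grade and the coil width falls inside its range
```

In the real system the rules are defined by plant experts and are considerably richer (mechanical characteristics, reworking costs, etc.); the sketch only shows the collect-and-filter principle.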

Task 1.4: User requirements analysis; Task 1.5: System requirements analysis regarding the demonstration example

Both tasks are devoted to requirements analysis with a view to the selected use cases, covering different perspectives:

User (functional) requirements (Task 1.4): the required functions were analysed, including aspects such as the broker, agents, marketplace communication, persistence, configuration, logging, user interfaces and ontologies.

System (non-functional) requirements (Task 1.5): the criteria for the operation of the system were determined, including aspects such as performance, interoperability, availability, usability, extendibility, testability, deployment, maintainability, security and scalability.

Task 1.6: Performance measurement

The aim of this task is to define performance indicators that allow measuring the efficiency of the solution and estimating its benefit. We consider the following dimensions:

IT indicators: performance measurements focusing on technological aspects of the solution, e.g. response time, availability, mean time between failures, and resource efficiency (CPU, memory, network).

Business indicators: business benefits provided by the solution, e.g. affected tonnage, potential for cost reduction, and efficiency improvements.

Work package 2: Development of overall, system and application architecture and development of semantic model

Task 2.1: Creation of technical environment

The goal of this task was to define and set up the technical environment required for the development and implementation activities. The task can be subdivided into the following three subtasks:

• Selection of the agent platform: different available multi-agent systems (e.g. JADE, JIAC, D'ACCORD) were evaluated, and D'ACCORD was selected for the realisation of the use cases.

• Selection of development languages and environment: based on the outcomes of WP1, the requirements for the different frameworks and development languages were defined in order to fulfil the constraints of the IT landscape, which mainly consist of:

o accessing legacy databases through a service rather than a direct connection

o integrating communication protocols that are compliant with automation standards (ANSI/ISA-88/95, OPC, OLE)

• Selection of the project management system: to support the development process, two different project management systems, Microsoft TFS and Redmine, were analysed; in the end both were used in the project, covering different aspects of the collaborative work.

Task 2.2: Definition of the development process

Due to the integrative nature of this project, with different distributed teams working together on one solution, it was necessary to define a strategy for the development process. We selected the Scrum methodology for project management. Scrum is a way for teams to work together to develop a product. Its key principle is to divide the whole development scope into small steps (sprints), where each step builds upon previously achieved results. Further, it defines different roles (product owner, scrum master, developer) within a flat hierarchy, certain activities (e.g. regular meetings and reviews) and formal artifacts (product backlog, sprint backlog, etc.) to support target-oriented development.

In addition to the Scrum process, we also defined within this task the strategy for source code versioning and a testing concept.

Task 2.3: System and application architecture definition

This task focuses on the development of the I2MSteel software architecture and the later application. Both are closely coupled to the requirements analysis performed in WP1. For the primary application we decided to use the Windows-based .NET technology, while also supporting other platforms and operating systems through the "external agent" concept. Further, we defined the structural design of the architecture by introducing the "Broker" entity, which provides the yellow-pages functionality for the agents. The design of the overall architecture also foresees the possibility to couple different instances of I2MSteel applications, allowing messages to be exchanged between agents running on distributed systems.

Furthermore, we defined the so-called AHDL (agent holon definition language), which defines the communication protocol for the I2MSteel platform.
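The Broker's yellow-pages role can be illustrated with a minimal sketch. The class and method names below are invented for illustration and do not reproduce the actual D'ACCORD or AHDL interfaces:

```python
class Broker:
    """Minimal yellow-pages sketch: agents register the services they
    offer, and other agents look up providers by service name."""

    def __init__(self):
        self._registry = {}  # service name -> set of agent names

    def register(self, agent, service):
        self._registry.setdefault(service, set()).add(agent)

    def deregister(self, agent, service):
        self._registry.get(service, set()).discard(agent)

    def find(self, service):
        """Return a sorted list of agents offering the given service."""
        return sorted(self._registry.get(service, set()))

broker = Broker()
broker.register("HistoricAgent", "alternative-thickness")
broker.register("Level2Agent", "alternative-thickness")
providers = broker.find("alternative-thickness")
# both registered agents are discoverable under the service name
```

The real Broker additionally handles message routing between coupled I2MSteel instances; the sketch shows only the register/lookup principle.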

Task 2.4: Investigation and evaluation of industrial standards regarding ontologies

Within this task a comprehensive analysis and evaluation, in terms of domain coverage, of existing relevant industrial standards for products and processes along the production line in the manufacturing domain was carried out. The results were recorded in a report whose main focus is to establish the basis for identifying and selecting relevant domain knowledge models that can be re-used and/or adapted for describing the I2MSteel demonstration scenario.

Besides industrial standards such as ANSI/ISA-88/95, PSL (Process Specification Language), ESIDEL (European Steel Industry Data Exchange Language) and OPC (OLE for Process Control), the report also describes the most closely related applications and use cases found in the literature, as well as some approaches to and definitions of foundational ontologies.


Task 2.5: Development of a semantic model by combination of all necessary manufacturing ontologies

The overall goal of this task is the development of a domain ontology for the steel manufacturing domain. This domain model aims to describe all relevant concepts needed to describe the selected use cases within the I2MSteel project. The developed semantic model uses a multi-level architecture representing the different user roles, e.g. knowledge engineer, supplier, plant engineer. Further, it comprises six foundational models (including the Structure, Measurement, Storage, Order and Process models) covering the different aspects of the steel manufacturing domain.

The results gained within this task led to the submission of a patent application entitled "Method and system for providing data analytics results" (patent application number 15 183 115.6).

Task 2.6: Integration of external and implicit knowledge sources into the semantic model

This task is devoted to the use of inference mechanisms to make explicit the knowledge stored inside the I2MSteel semantic model. To this end we integrated a number of modules into the Semantic MediaWiki (SMW), e.g. a triple store connector and SPARQL (SPARQL Protocol and RDF Query Language) support, allowing advanced querying and reasoning techniques. By means of such SPARQL queries, the implicit knowledge covered within related standards such as ESIDEL, ISO or PSL can easily be made explicit and used within any envisioned semantic application.
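The kind of query involved can be sketched as follows. The vocabulary (prefix, class and property names) is invented for illustration, and the toy in-memory triple set merely stands in for the SMW triple store:

```python
# Illustrative SPARQL query of the kind run against the triple store.
# The i2m: vocabulary is hypothetical, not the actual I2MSteel ontology.
query = """
PREFIX i2m: <http://example.org/i2msteel#>
SELECT ?process ?standard WHERE {
    ?process a i2m:ManufacturingProcess .
    ?process i2m:definedBy ?standard .
}
"""

# Toy triple set standing in for the triple store contents.
triples = {
    ("HotRolling", "a", "ManufacturingProcess"),
    ("HotRolling", "definedBy", "ANSI/ISA-95"),
    ("Pickling", "a", "ManufacturingProcess"),
    ("Pickling", "definedBy", "ESIDEL"),
}

def processes_with_standard(triples):
    """Naive evaluation of the two-pattern query above: first bind the
    processes, then join with their defining standard."""
    procs = {s for (s, p, o) in triples
             if p == "a" and o == "ManufacturingProcess"}
    return sorted((s, o) for (s, p, o) in triples
                  if p == "definedBy" and s in procs)

result = processes_with_standard(triples)
# each process is paired with the standard that defines it
```

In the platform the query is of course executed by the triple store itself; the hand-written join only illustrates how the two patterns combine to make the implicit process-to-standard links explicit.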

Task 2.7: Implementation and testing of the semantic model

For the implementation of the semantic model we used a semantic technology stack based on Semantic MediaWiki (SMW) with different extensions. SMW, a well-known collaborative engineering tool, provides an intuitive web-based interface for human users, while integrated extensions such as the triple store provide a standardised interface to the knowledge base for external applications, e.g. the agent system. The integration of the semantic model into the agent platform is based on the "Semantic Agent" concept: the semantic agent provides access to the knowledge base through the agent communication protocols.

Work package 3: Agent Holon Development

Task 3.1: Functional analysis for agent/holon description and development of general approach of agent/holon system

In Task 3.1 we laid out the functionality needed to provide a solution for the two selected demonstration use cases, "Alternative Thickness" and "Reallocation".

Starting from the functional requirements defined in WP1, a deep analysis of the situation at the demonstration example plant was performed. The use cases presented in WP1 were investigated and the two most urgent scenarios of the industrial partner were identified to be:

• Alternative Thickness

• Reallocation

The first use case, "Alternative Thickness", addresses a local issue in the steel production of the demonstration plant in Dunkirk.

The "Reallocation" use case has a much larger scope. Leaving the horizon of a single plant is of great industrial interest to ArcelorMittal (AM); we therefore expand the application of the I2MSteel system to a division-wide, inter-plant reallocation of products to existing orders.

The details of the functional solutions for the demonstration example plant in Dunkirk and the Business Division North were worked out.

Task 3.2: Implementation of the agents and holons abstract description

In Task 3.2 we worked out the agent and holon technology to put our solution into practice. CSM, BFI and SIE derived a general, modular structure to guide the development of agent systems using class-based implementation strategies: a diversity of agent functionalities is needed, while most of them share a common set of basic operations. The partners also defined a strategy by which existing agent technologies are used to realise the solution concepts for both the framework and the use cases. A detailed description of the agents and holons necessary for the implementation of the two use cases was produced.

The functional design was divided into a general part, forming the basis of the I2MSteel agent framework and shared among all use cases, and specific detailed functional requirements defined for the Alternative Thickness and Reallocation (as-is or with reworking) use cases.

In the Alternative Thickness use case, the example plant fails to roll the specified thickness of an order. The decision whether the target thickness can be reached or not is made by the Level 2 system. If the achievable minimum thickness is outside the tolerance margins [h1, h2] of the original order, the order cannot be fulfilled, which triggers our use case. The I2MSteel solution is then expected, using the achievable minimum thickness and an acceptance time window ∆t as input, to deliver an alternative value for the thickness h. The request is sent after the slab has left the roughing mill and before it enters the finishing mill.
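The trigger condition above can be sketched as follows; the numeric values are purely illustrative:

```python
def needs_alternative_thickness(h_min_achievable, h1, h2):
    """True when the original order cannot be fulfilled, i.e. the
    achievable minimum thickness lies outside the tolerance
    margins [h1, h2] of the order."""
    return not (h1 <= h_min_achievable <= h2)

# Example: order tolerance 2.00-2.10 mm, but the mill can only reach
# 2.25 mm for this (too cold) slab -> the use case is triggered and
# the I2MSteel platform is asked for an alternative thickness h.
trigger = needs_alternative_thickness(2.25, 2.00, 2.10)
```

The negotiation of the alternative value itself involves the Level 2 agent and the historic agent described in Task 3.3; only the triggering test is shown here.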

The Reallocation use case is triggered by selecting a de-allocated coil. Some parameters of this product failed to meet the specified tolerances, or it was overproduced, and the original order had to be reassigned. In the virtual perspective the deficient coil becomes a product item on a virtual market. The product can be reallocated as-is or with reworking. In the second case it is necessary to predict the mechanical characteristics of the final product, to verify the compliance of the calculated characteristics with the available orders, and finally to evaluate the costs of the reworking.

Task 3.3: Agents / holons software development

Task 3.3 represents the transfer of our conceptual design into a software system; in this task we developed the software.

A series of framework agents based on the Visual C# language was established. These were checked and verified against the prototypes with respect to numerical equivalence. Based on the experience with our algorithmic design studies, the meta-agent base class was implemented in C#, together with most functional agents, to construct the solution for both use cases.

The Alternative Thickness use case was developed in order to test the flexibility of the multi-agent architecture in solving online control problems. It is composed of several agents: an agent representing the Level 2 automation system; an agent managing the thickness-value negotiation for the I2MSteel platform; an external agent interfacing with the plant's Level 3 automation system through the SOA; and a historic agent, the second actor of the thickness negotiation, providing a solution for the alternative thickness based on the historical production of the plant.

For the Reallocation use case, both specific agents and enhancements of the platform were implemented. Several agents were implemented, focused on base services such as DB access, the proximity search engine, the prediction of mechanical characteristics, cost evaluation, and orders and products with their graphical user interfaces.

Task 3.4: Integration and local functionality tests

The aim of this task was the integration of the ontology, the SOA and the agents/holons in a local environment.

In order to examine the developed solutions and to perform validity tests, a simulation environment has been installed in which the agent framework, the ontology and the SOA were integrated. The simulation environment contains the D'ACCORD platform and a fully integrated SOA stack including the service functions, service description and service access management. Within the project, the SOA architecture provides the interface to the data. In order to simulate data access under realistic conditions, a database with a copy of real data originating from the ArcelorMittal plant has also been deployed.

Work package 4: Services development

Task 4.1: Implementation of the Service Abstraction Layer

The Service Abstraction Layer (SAL) provides a collection of services and communication interfaces using standard protocols that decouple the communication infrastructure of the plant systems from the agent platform, enable syntactic/structural and semantic interoperability, and provide reusable components. The SAL includes a registry service, plant data services, a persistence service and semantic interoperability services. These services are secured using password authentication and policy-based access control filters.

The service registry provides the interfaces and functionality for searching, storing and retrieving service description documents, which contain the interfaces, endpoints and metadata of the plant application and data services. The registry supports two types of service description documents, depending on the web service type (SOAP or REST). The agents can use the SAL client (SOAClient) library to query for available services and to retrieve the service description documents with all the technical specifications a web service client needs to invoke the services.
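The store-and-lookup flow of the registry can be sketched with an in-memory stand-in. The class, method names and the example description document below are invented substitutes for the real registry and SOAClient interfaces:

```python
class ServiceRegistryStub:
    """In-memory stand-in for the REST-based service registry: stores
    service description documents and returns them on lookup."""

    def __init__(self):
        self._docs = {}

    def store(self, name, description):
        self._docs[name] = description

    def lookup(self, name):
        """Return the stored description document, or None if absent."""
        return self._docs.get(name)

registry = ServiceRegistryStub()
registry.store("products-data", {
    "type": "REST",                               # SOAP or REST
    "endpoint": "https://sal.example/api/products",  # hypothetical URL
})
doc = registry.lookup("products-data")
# the document gives an agent the endpoint and metadata needed to
# invoke the plant data service
```

In the real SAL the registry is itself exposed as a secured web service and the lookups go over HTTPS; the sketch shows only the contract the agents rely on.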

For the purposes of integration and system tests we deployed the product and order data on the SAL. Two database technologies were deployed to host the data sets: MongoDB and Aerospike. Both are open-source, freely available NoSQL key-value databases. Additionally, RESTful web services were deployed to provide a REST API for the data services. The services are designed to run in the Tomcat 7 servlet container and are called by the agents using the SAL client (SOAClient) library.

The persistence service is used for logging service usage for monitoring, for storing service descriptions for the service registry, for caching data values from the plant databases, and for registering reallocated products. The registration of reallocated products is needed by the Reallocation use case to avoid double-reallocating the same product. The persistence service is implemented as a RESTful web service on top of the MongoDB database.
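The double-reallocation guard can be sketched as follows; the interface is an invented stand-in for the persistence service, not its actual API:

```python
class ReallocationRegister:
    """Stand-in for the persistence service's reallocated-products
    register: each product may be reallocated at most once."""

    def __init__(self):
        self._done = set()

    def try_reallocate(self, product_id, order_id):
        """Record the reallocation; refuse if this product was
        already reallocated to some order."""
        if product_id in self._done:
            return False
        self._done.add(product_id)
        return True

reg = ReallocationRegister()
first = reg.try_reallocate("C7", "O1")   # accepted
second = reg.try_reallocate("C7", "O3")  # rejected: C7 already used
```

In the platform this check is backed by MongoDB rather than an in-memory set, so the guard survives restarts and is shared between the plants' agent platforms.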

The semantic interoperability services use the I2MSteel ontologies, which are structured into two layers: the high-level domain ontology and the lower-level plant ontologies. Interoperability is achieved by linking the concepts/terms of these two layers, either by an inheritance relation between them or by explicitly defined interrelationships. We also developed a third layer of semantic linking, which maps the database schema to the plant ontology and effectively adds a SPARQL (query language for RDF) interface to the database. This way, the plant databases can also be accessed and exposed as RDF (Resource Description Framework) graphs and queried in the same way as the ontologies. The Reallocation use case, which involves multiple plants each with an agent platform, benefits from this approach when agents (i) request a data store's schema details or (ii) query a plant's model to identify manufacturing processes, logistical information or other plant information. The semantic links between concepts described above associate the related concepts of the two ontologies; with the addition of the database field provided by the third mapping layer, the complete path from the I2MSteel core term to the database field is produced. This path is then cached in a JSON file in order to avoid the semantic agent calling the ontology service every time the database needs to be accessed. With this caching of mappings, the agents of the platform can be agnostic of the schema of the plant database they need to access.
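The caching of mappings can be sketched as follows; the term and field names are illustrative assumptions, not the actual I2MSteel vocabulary or schema:

```python
import json
import os
import tempfile

# Resolved path from a (hypothetical) I2MSteel core term, via the plant
# ontology term, down to the database field of the plant schema.
mappings = {
    "i2m:CoilWidth": {
        "plant_term": "dunkirk:StripWidth",
        "db_field": "PRODUCTS.WIDTH_MM",
    }
}

# Cache the mappings once, e.g. after one call to the ontology service.
path = os.path.join(tempfile.gettempdir(), "i2m_mappings.json")
with open(path, "w") as fh:
    json.dump(mappings, fh)

# Later, an agent resolves the field from the cache without calling the
# ontology service again, staying agnostic of the plant schema.
with open(path) as fh:
    cached = json.load(fh)
field = cached["i2m:CoilWidth"]["db_field"]
```

The cache file is regenerated whenever the ontology definitions in the Semantic MediaWiki change, so the trade-off is one ontology-service call per update instead of one per database access.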

The platform is set up to use user/password (current configuration) or certificate-based authentication. The platform's access control component filters all requests and evaluates them against the deployed XACML policy file in terms of the type of action a specified user performs on a resource. This allows control over who is accessing which chunk of data and allows the definition of policies that authorise or restrict access to sensitive information, thus increasing confidence in the sharing and exchange of data.

Task 4.2: Implementation of SOA-based agent/holon communication mechanisms

Two alternative implementations were identified: the internal SOA-capable agent, which can invoke the services directly, and the external SOA-based agent, which acts as a web service proxy between the D'ACCORD platform and third-party services. Both types of agent support the web service protocols SOAP and REST over HTTPS and the security protocols (authentication, access control and message encryption). For performance reasons we decided that the overhead of full agent functionality was unnecessary in our SAL client implementation. Instead, we developed a library implementing the SAL services client that is used by every agent requiring access to the SAL services. The DBAgent is the main agent accessing the registry and the data services described in the previous section; the proximity agent accesses the persistence service; and the semantic agent accesses the ontology services.

Task 4.3: Test-bed setup, software testing and migration paths

For the purposes of integration and system tests we deployed the product data from the "ArcelorMittal Division North" and "ArcelorMittal Sagunto" plants on the data services of the SAL. We also deployed the orders data collection, with data from ArcelorMittal, on the MongoDB database system. These data represent a selection of Level 2 and Level 3 production data from ArcelorMittal's order management system and plant systems, and were delivered to us for the purpose of building a virtual simulation environment in which to perform integration and testing of the I2MSteel components and the overall infrastructure.

This task also deployed the Semantic wiki application and the 4store triple store for the activities of Tasks 2.4, 2.5 and 2.6 of work package 2, as well as for testing the ontology-based services developed in Task 4.1.

Task 4.4: Software packaging and configuration

The I2MSteel platform is packaged into two bundles: one contains the D'ACCORD platform and the implementation of the agents, the other contains the packages for installing the Service Abstraction Layer.

Work package 5: System integration, implementation of demonstration example, simulation, adaptation

Task 5.1: Integration of final system

The developed solutions are installed in a virtual environment of the AM central IT in Fos-sur-Mer. A dedicated server runs as a virtual machine inside a virtual server architecture; this machine hosts the agent platform and the Service Abstraction Layer (SAL).

The ontology environment is installed on a Linux-based architecture at CETIC's infrastructure. The ontologies are accessible via the Internet and can be used inside the project without any restrictions. A remote user interface is deployed for the communication with the agent platform, meaning that the agent platform and the SAL can remain in Fos-sur-Mer while the client application runs on the local computers of the final users.

Task 5.2: Definition of inputs, exchange protocols and way of agent activation

The aim of this task is to ensure correct semantics in the communication between the local systems of AM and the solution provided by the I2MSteel project. The answer to this problem is the use of term mappings between local terms and the terms used within the I2MSteel solution. To this end we use the semantic agent service, which automatically generates a mapping object based on the definitions made within the ontology and managed by the Semantic MediaWiki. This mapping object translates terms between the internal nomenclature and the external one used within the use cases to access the data.

Task 5.3: Setup of connections to databases and servers

In order to prevent perturbations of the running production systems, the agent platform was not connected directly to the production systems for the first evaluation tests. Instead, additional databases were installed, to which the original data required for the execution of the Reallocation use case is transferred regularly. The databases were installed on the same server as the platform itself and were integrated into the Service Abstraction Layer (SAL). The agent platform accesses these databases through the client-side components of the SAL.

Task 5.4: Simulation and adaptation

The aim of this task is the execution of preliminary system tests in order to ensure the reliable operation of the solution. The work was performed under real conditions, using real data and the same IT infrastructure that is used by the final users. The simulation study was performed at the facilities of AM, initially in the Fos-sur-Mer plant, where the installation of the agent system is physically located, and afterwards in the Sagunto plant, where the users reside. The test scenario comprised operational as well as functional tests.

Work package 6: Test Phase, Evaluation of test results and investigation of transferability

Task 6.1: Validation Specifications

Within this task the experts of AM and SIE evaluated the indicators defined within Task 1.6 with reference to the current project development status. Afterwards we investigated which additional data needed to be considered for the calculation of the indicators and how these data could be recorded or created. Finally, we defined for each evaluation criterion the strategy for result generation. These strategies comprise the software adjustments necessary for the analysis and measurement of the evaluation criteria.

Task 6.2: Demonstration example execution

This task is devoted to the execution of the demonstration example, here the Reallocation use case. Product data coming from the plant were merged with order data provided by the central orders database, thereby testing the integration of the I2MSteel application with both the Division North and Sagunto legacy databases. The preliminary execution tests were done by the developer group, followed by an extensive use of the system by the people from the Sagunto plant in charge of reallocation.


Task 6.3: Final tuning of the system

Within this task the I2MSteel system was adapted according to the experience gained during the previous tasks. In particular, a new user interface was designed and developed, considering the functional, ergonomic and usability needs of the final users. The GUI uses web service technology to exchange the necessary data with the I2MSteel system, allowing remote operation: the final users can run the GUI application on a local machine while the I2MSteel system is located on the central server system.

Task 6.4: Investigation of transferability

Within this task the transferability of the I2MSteel solution is analysed from both the technological and the practical point of view. Transferability is one of the major project objectives and is achieved through the combination of three different technologies. Using the agent system, it is possible to develop universal solutions that are independent of a specific plant configuration. The configuration is part of the ontological model, which allows the agents to find out automatically how to access the necessary data. And last but not least, if a specialised implementation of data access or the execution of specific algorithms is required, this can be realised within the SOA. This conceptual design focuses on transferability as one of its main features.

Task 6.5: Evaluation of results with respect to technical and economic benefits

This task is devoted to the analysis of the technical and economic benefits achieved with the I2MSteel system. The analysis is based on the KPIs identified within Task 1.6 for both use cases, reallocation and alternative thickness. In summary, for the use case “Reallocation” a potential earning of 3 M€ per year can be reached using the I2MSteel solution; furthermore, we identified that the workload of the plant experts can be reduced by 30% when applying the I2MSteel system. For the use case “Alternative thickness” the evaluation indicates a possible potential of 215 k€ per year for the AM Dunkirk plant, or ca. 1 M€ when considering the AM European production.


2 Scientific and technical description of the results

2.1 Objectives of the project

The main objective of the proposal is to develop and to demonstrate a new paradigm for a factory- and company-wide automation and information technology for “Intelligent and Integrated Manufacturing” in steel production. To reach this main objective, the following sub-objectives are necessary:

1. To describe important aspects of the steelmaking supply chain (here, as an example, the supply chain from continuous casting to hot rolling) by using semantic technologies in such a way that higher-level automation and information systems have the basis for orientation, communication and high-level information exchange across the complete supply chain.

2. To develop strategies and concepts for a system of software agents and holons which are able to perform all the high-level tasks of steel production: product tracking, process control, process planning, through-process quality control, information storage, logistics, etc.

3. To develop and implement in a prototypical fashion the necessary software agents/holons to realise the developed concepts, and to implement solutions for the selected business cases during the demonstration phase of the project.

4. To develop a system of services (Service Oriented Architecture, SOA) which offers the necessary basic routines for the agent level regarding communication, product tracking, negotiation protocols, data storage, event handling, etc.

5. To implement selected parts of the solutions developed above and to demonstrate the suitability and performance of the new paradigm.

6. To further investigate the transferability of the new paradigm to all possible kinds of processes and process chains in the steel industry.


2.2 Description of activities and discussion

2.2.1 Work package 1: Requirement analysis

Task 1.1: Identification of specific and global goals

The aim of this task is to identify the specific goals originating from the demonstration examples, from the point of view of a steel producer as well as from the special view of an automation supplier. Additionally, different possible application scenarios should be investigated in order to identify a broad spectrum of requirements for the system to be developed.

Generally, the project aims to develop a new control system architecture (the I2MSteel system) based on the concept of holonic agents, enabling a more flexible manufacturing system which makes it possible to better exchange information between processes and manufacturing sites in order to continuously adapt the production based on knowledge of the current quality of the products and the state of the processes. The paradigm shift of I2MSteel will enable steel producers to apply their resources in conformance with customer and market demands and will make the overall manufacturing system management more transparent.

To perform this task, the partners first identified general goals for the system, covering major roadblocks in current industrial systems. The following goals were determined as the most important:

• Integration: one of the major roadblocks during the development of new process optimisations is the missing availability of information from different process stages and automation levels. Often such information is encapsulated within proprietary systems without open interfaces. Therefore one of the major goals within this project is to remove this limitation on information exchange by providing a flexible agent-based system allowing easy integration of different information sources within a common framework.

• Extendibility: a very important goal for the I2MSteel system is the possibility to easily extend the provided solution. In practice, a solution is often developed for a known problem; after some time of usage, or due to changes in the process, the users gain knowledge about additional optimisation potential for the initial solution. Integrating this additional potential usually requires a complete redesign of the system, creating a high barrier to practical realisation. Therefore one of the major goals is to provide the ability to extend the solution with new functionalities without the need for changes to the initial approach.

• Transferability: the development of a general solution for a certain type of problem is often hindered by the lack of a common description of the solution environment. For example, a typical situation is that the configuration of process steps and related data sources is very individual in each plant; such individual conditions are therefore usually integrated directly into the solution, which makes the transferability of the developments to other plants very difficult or even impossible. The aim within this project is to provide a common description of processes and data sources, allowing the development of easily transferable solutions.


In addition to the global goals, discussions with plant experts have been performed. The intention was to analyse the general user needs and expectations of the system, with a focus on problems originating from the practical experience of the users. In this context the following goals have been identified:

• Energy optimization by more efficient planning with the inclusion of existing data from caster, slab yard, furnace and rolling mill.

• Optimization of the production steps by inclusion of the automation systems in the different

production areas.

• Comprehensive consideration of the production process when deciding on necessary rescheduling in case of unforeseen events.

• Support of scheduling by current information from the slab yard area.

• Decision-making for the production planning by means of strategies taking into account

energy, resources, delivery time, throughput and costs.

• Support of the operator in deciding on exceptional strategies in order to avoid disturbances and scrap material.

Complementary to the goals investigation, we analysed different possible application scenarios in order to identify the specific requirements for the system. The scenarios have been investigated in collaboration with plant experts and refer to typical optimisation challenges. The following use cases have been considered, covering different aspects of the I2MSteel system.

• Alternative Thickness: this use case covers the capability of the system to react to unforeseen events, for instance a too low temperature applied to a slab in the furnace of the hot rolling mill, preventing it from being rolled to the desired thickness as requested in the assigned order. In this case the I2MSteel system should find an alternative rolling thickness for the slab, considering the probability of fulfilling an existing or prospective order.

• Order recovery: A coil for a specific order was scrapped and it has to be produced again with the highest priority. The quality manager requests a list of alternative materials. The I2MSteel system searches the RF, the HSM and the coil yard for alternative materials and provides this list to the quality manager.

• Slab insertion into rolling program: A customer requests that an order be produced urgently. The program scheduler tries to insert this order as early as possible into an existing rolling program. The I2MSteel system searches for the best slot in the schedule.

• Feedback loop from pickling line to cooling section of the HSM: After the pickling line, a measurement device based on magnetic remanence indirectly measures the yield strength along the strip. The aim is to control this signal by exploiting a correlation with the coiling temperature. When a strip passes the pickling line, the magnetic remanence and the corresponding coiling temperature are stored in an internal database. This information is used by a self-adapting model to determine a coiling temperature set-point curve for subsequent strips.

• Energy market oriented production planning: A MES system has scheduled a rolling program and then requests the I2MSteel system to find an energy-cost-optimised time slot based on energy prices provided by the energy provider. The system should consider the energy consumption of the reheating furnace and the roughing mill, as well as the rolling time.

• Energy Management: Taking into account the order book list with the relevant constraints (lead time, costs, plant limitations, etc.), a draft production plan is sent from the MES planning module to the local scheduling module. The I2MSteel system performs energy demand and cost forecasts and optimises the schedule with respect to energy efficiency.

• Rescheduling: The target schedule cannot be fulfilled with the current setup, due to a malfunction or disturbance. Data for replanning is accessed, using the remaining material flow, the order situation and the plant state. The I2MSteel system generates a new optimised schedule which is as close as possible to the original one.

• Reallocation: Errors during the production of metal units (slabs or coils) lead to products which are not compliant with their initial order. Such slabs or misrolled coils remain in stock in a yard until they can be allocated to a new order. The I2MSteel system should provide a proposal for the reallocation of these units based on the global order book of the ArcelorMittal Division North (which includes several plants). The solution should take into account: quality, delay of orders, cost of movements (depiling, trucks), scrapping to be done, and the possibility of repairs.
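The self-adapting model in the feedback-loop use case above can be sketched as a simple set-point adaptation; the linear gain and the smoothing factor below are illustrative assumptions, not the project's actual model:

```python
class CoilingTempAdapter:
    """Illustrative self-adapting set-point model: after each strip, the
    deviation between the measured remanence (a proxy for yield strength)
    and its target is fed back into the coiling-temperature set point."""

    def __init__(self, base_setpoint_c: float, gain: float = 2.0,
                 smoothing: float = 0.3):
        self.setpoint = base_setpoint_c   # current set point [deg C]
        self.gain = gain                  # assumed deg C per remanence unit
        self.smoothing = smoothing        # exponential smoothing factor

    def update(self, measured_remanence: float,
               target_remanence: float) -> float:
        """Nudge the set point toward the value that would have hit the
        target, smoothed over successive strips."""
        correction = self.gain * (target_remanence - measured_remanence)
        self.setpoint += self.smoothing * correction
        return self.setpoint
```

The smoothing keeps a single noisy strip measurement from moving the set point too far, mimicking the "smooth" adaptation described in the use case.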

For each of these possible application scenarios a detailed analysis of the use case has been performed. To do this we developed a structure for the use case description on the basis of the Cockburn1 template.

The structure of the template and the description of the analysis of the use cases can be found in Appendix 1. The results of the analysis have been discussed with the plant experts of ArcelorMittal. Based on their feedback and current demands at ArcelorMittal plants, we selected the two following evaluation scenarios:

• Alternative Thickness

• Reallocation

These two scenarios will be implemented within the project in order to prove the ability of the I2MSteel system to provide solutions according to the identified goals.

Task 1.2: Definition of industrial needs and requirements

The aim of this task is to investigate the needs and requirements regarding the implementation and integration of the I2MSteel solution in the industrial environment. A distinction is made between the point of view of information systems and the point of view of production.

Information System:

1 The Cockburn use case template is one of the most popular designs for use case descriptions within the agile software development community.


One of the major problems in the current situation at steel plants is the missing interoperability of the different systems. No standards exist which would allow easy integration of an additional information system into the existing environment. Integration of new systems is always a journey in which much more effort (an estimated 80%) is put into setting up its communication with the environment than into the integration of new software components and functions.

The lack of such a framework also results in a discovery process to find out which elements are available and within what concept, making integration difficult and resulting in uncoordinated redundancy of data within the system, which can jeopardise the consistency of information. As an example, Figure 1 represents the IT architecture of the AM plant Florange, showing the complexity of the IT infrastructure even though the flow of information between applications is not described. This is the typical brownfield situation which an automation supplier finds in the case of automation revamps.

Figure 1. Florange Plant - IT Architecture

Furthermore, rigidity is strengthened by hierarchical control structures like the automation pyramid (see Figure 2). The conceptual design of such systems often corresponds to the organisational structure of a company and for this reason does not permit flexible coupling of the systems.


Figure 2. Automation pyramid

In a first step, the aim must be to break these borders with the new technology and thereby allow the exchange of information and direct interactions across all levels. The next generation of automation systems should leave the hierarchical structure behind and make the required services available globally.

The I2MSteel project applies the agent-based approach in order to overcome the hierarchical borders, with a special focus on brownfield projects.

Production:

The integration of such a new system necessitates the ability to interact with several systems which work at different time scales: at the automation level the time scale is milliseconds, for production it is seconds and minutes, and at the higher levels of programming and logistics it is hours and days. The system should be able to manage these different time scales with the right intercommunication services and agents.

In a brownfield, the system should not disturb the existing installation or slow down the production by making it wait for decisions taken by I2MSteel. This is why it will work in open loop until a sufficient amount of tests has been done to decide to work in closed loop.

In ArcelorMittal, greenfields are not within the perimeter of the European installations, but after exchanges with correspondents we concluded that the proposed system could be an alternative to a traditional manufacturing system, and its implementation would be much easier thanks to the usage of new programming languages such as C and Java.

For brownfield installations in Europe, communication should be obtained through the development of specific interfaces, for instance at Level 2, where most of the production control is managed by VAX systems. At the higher levels, some programs are still written in the COBOL language, which requires a dedicated interface for the installation of a services layer.

Cartographies of the applications and corresponding communication protocols have thus been produced in order to give the right direction for further development.


Task 1.3: Specification of demonstration example details

The preliminary steps in the use case definition have been done with the help of the plant experts: we listed practical use cases which are seen as important for the design of the I2MSteel system. As stated in the previous chapters, two of the use cases have been chosen for implementation. In the following sections, additional information for these two use cases is presented.

Use case Alternative Thickness:

The use case alternative thickness appears before the finishing mill as a consequence of a disturbance within a process, or in general due to an aberration of some product property (e.g. too low a slab temperature), leading to the fact that the slab cannot be rolled to the desired thickness. After the occurrence of this problem the operator has only a short time to react in order to find a new suitable thickness for the bar to be rolled. Before the roughing mill, slabs that do not fulfil the requirements can be reintroduced into the furnace, so we consider only bars after the roughing mill, i.e. those for which the width can no longer be modified during rolling. The time to find a solution is then around tens of seconds.

A study of the production in the Dunkirk plant indicates that around 250 coils, representing 6000 tons, are concerned by this unpredicted event. Among these coils, 70% will be reallocated to other orders; the rest is downgraded or scrapped, which represents 1800 tons per year.

A study made on the production data of the Dunkirk plant shows that 90% of this tonnage could be reallocated to a prime order if the right alternative thickness were chosen. The goal of this use case is to predict the best alternative thickness in order to maximise the probability of reallocation.

The thickness that can be rolled has a minimum allowed value, and the agents have to find orders above this minimal thickness. Moreover, the rolling thickness difference with respect to the following bar should not be too high. Hence we have to find a modified sequence of thicknesses that is locally compliant with the following constraint:

(Thickness of bar i − Thickness of bar i+1) < delta

where delta is a tuning parameter whose value can be set to 0.5 mm by default.

Two cases can appear:

• The target thickness of the following bar is compliant with the above constraint, and nothing has to be changed for this bar.

• The difference is too large, and an alternative thickness has to be found for the following bar too.
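The cascading rule above can be sketched as follows; treating the bound as inclusive and simply clamping each following bar toward its planned target are illustrative assumptions:

```python
DELTA_MM = 0.5  # default tuning parameter from the use case description

def adjust_sequence(thicknesses_mm, new_first_mm):
    """Given the planned thicknesses of the upcoming bars and an alternative
    thickness forced on the first bar, propagate the thickness-difference
    constraint down the sequence, changing a following bar only when its
    planned target would violate the constraint (bound treated as inclusive)."""
    result = [new_first_mm]
    for planned in thicknesses_mm[1:]:
        prev = result[-1]
        if abs(prev - planned) <= DELTA_MM:
            result.append(planned)          # planned target still compliant
        else:
            # clamp toward the planned value as far as the constraint allows
            step = DELTA_MM if planned > prev else -DELTA_MM
            result.append(round(prev + step, 3))
    return result
```

For example, forcing the first bar from a planned 2.0 mm to 3.5 mm ripples through a [2.0, 2.0, 2.0] sequence as [3.5, 3.0, 2.5], while a sequence already within delta of the new value is left unchanged.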

Other constraints have to be taken into account (chemical analysis, mechanical characteristics to be obtained, surface aspect). This compliancy between the new order and the product is evaluated by the Proximity Search Engine (PSE) described in the reallocation use case. The Level 2 systems will dynamically recalculate the process conditions needed to achieve the new target: forces, rolling and coiling temperatures.


In the case of multiple propositions where cost and delivery time differ, we give priority to the cost due to trimming over the delivery time. This rule could be adapted to a specific situation, for instance for a rush order.

Use case Reallocation:

During production, metal which is not compliant with its order is de-allocated. That means that at the end of the current process step the product no longer continues its transformation but is stocked in a yard and waits for allocation to a new order. This de-allocation can have an impact on the delivery schedule of the customer, especially when it occurs at the final step of production. To avoid such a negative impact, the MES puts a priority on the orders with short delivery times. If the production line is not able to supply the material, an urgency process is triggered which will look for compliant material in another plant.

In the project we will consider slabs and hot rolled coils for reallocation. However, we will first focus on hot rolled coils, as for slabs some internal solutions in the plants already exist. The solution should incorporate the order books of different plants (here AM Division North) and find:

• an alternative order for the product with minor transformations (trimming, oiling, …)

• an alternative order in a finishing plant (for instance a galvanised order) where the coil fits the intermediate order

The general principle is to collect a list of orders and a list of non-allocated material, to translate them into the same language, and then to evaluate the compliancy using rules defined by experts (cp. Figure 3).

Figure 3. Principle of the Proximity Search Engine (PSE)

As the administrative task of allocating a product to an order which has already been assigned to a plant for production is very cumbersome, we will apply our solution only to new incoming orders. That means the possibility to reallocate a product will be checked after a customer has performed the booking.

In order to work on products which are relevant for reallocation, a classification of products was done. We identified the following three types:

• the product is compliant with a prime order for a non-visible part

• the product is 98% compliant with a prime order for a non-visible part and requires some repairs

• non-prime products

The Proximity Search Engine (PSE) will evaluate the technical match between an order and a product. Generally a technical product can be described by a set of various fields (cp. Figure 4):

• physical properties

• dimensions

• cleanliness and surface aspects

Figure 4. Dimensions of product properties

Each field includes several subcomponents, and for each subcomponent an acceptable range is assigned. When all the subcomponents are within their assigned ranges, the customer satisfaction theoretically reaches its maximum, i.e. 100%. But it is known that the performance level does not drop to zero as soon as a component gets slightly out of the specified range; the performance level often looks more like a smooth and asymmetrical curve. Figure 4 presents an example where, for the yield strength, 100% satisfaction is defined between 230 and 250 MPa, while outside the range 220-270 MPa the performance level is considered to be zero. The space in between corresponds to a linear function representing the incremental decrease of satisfaction.
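The yield-strength example corresponds to a piecewise-linear (trapezoidal) satisfaction function, which could be sketched as follows; the function names are illustrative, not the PSE's actual implementation:

```python
def satisfaction(value, zero_low, full_low, full_high, zero_high):
    """Trapezoidal satisfaction: 1.0 inside [full_low, full_high],
    0.0 outside (zero_low, zero_high), linear ramps in between."""
    if value <= zero_low or value >= zero_high:
        return 0.0
    if full_low <= value <= full_high:
        return 1.0
    if value < full_low:  # rising ramp
        return (value - zero_low) / (full_low - zero_low)
    return (zero_high - value) / (zero_high - full_high)  # falling ramp

def yield_strength_score(mpa):
    """Yield-strength example from the text: full satisfaction between
    230 and 250 MPa, zero outside 220-270 MPa."""
    return satisfaction(mpa, 220, 230, 250, 270)
```

A value of 240 MPa scores 1.0, while 225 MPa, halfway along the lower ramp, scores 0.5; the asymmetry of the curve comes simply from the ramps having different widths.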

Finally, an organisation should be put in place in order to manage the reallocation process: the functional organisation is based on a central team of business resources whose objective is to coordinate the reallocation between plants and commercial teams (Agency, RTC/MTS (technical customer relationship)).


Figure 5. Organisation of business resources

When I2MSteel has found a solution for reallocation, the central team contacts the expert in charge of reallocation at the plant which owns the product. During this procedure, they constitute a core team to proceed with the reallocation. This core team will have to validate with the customer service of the plant whether the order can be transferred, and with the technical support manager whether the customer could accept the transfer. The commercial agency will then transfer the order to the plant which owns the product.

In the future, the central team will have the means to carry out all necessary steps by itself:

• get stock level and supply information to make sure that a reallocation is still relevant

• integrate internal orders between plants, to create spot orders like a commercial agency does

• create protocols in the commercial systems to facilitate the creation of internal orders

Task 1.4: User requirements analysis; Task 1.5: System requirements analysis regarding the demonstration example

Generally, in software development and requirements engineering a distinction is made between functional and non-functional requirements. Functional requirements define the specific behaviour of an application, while non-functional requirements specify the operational criteria of the solution. Therefore user requirements can be considered as functional requirements (Task 1.4) and system requirements as non-functional ones (Task 1.5). To identify both types of requirements we performed an in-depth analysis of the possible use cases presented in Task 1.1. Based on this analysis we specified the functional requirements, covering the following aspects:

• Broker for agents: specifies the requirements for the “yellow pages” functionality within the agent system. The yellow pages/broker provide the ability for agents to search for each other.

• Software agents: specifies the requirements for the agents themselves

• Agent marketplace: specifies the requirements for the infrastructure which regulates the agent negotiations

• Communication with external systems: specifies the requirements for the integration of the agent system into a foreign environment

• Communication between installations: specifies the requirements for the collaboration of different installations of the system

• Communication between agents and the marketplace: specifies the requirements for the negotiation procedure

• Persistence: specifies the requirements for the possibility to store data generated by the

agents

• Licensing: specifies the requirements for ensuring the legality of the installation

• Configuration: specifies the requirements for the management of agent configuration

• Logging: specifies the requirements for the possibility to store log messages generated by

agents

• User interface: specifies the requirements for interaction with operators

• Ontologies: specifies the requirements for integration of ontologies in the agent system

• Record & Replay: specifies the requirements for debugging of agent communication

• Recovery: specifies the requirements for recovery of the system after a failure
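As an illustration of the “Broker for agents” requirement above, a yellow-pages registry can be sketched as follows; the class and method names are hypothetical and not taken from the D’ACCORD platform:

```python
from collections import defaultdict

class YellowPages:
    """Minimal broker: agents register the services they offer, and other
    agents look up who currently provides a given service."""

    def __init__(self):
        self._providers = defaultdict(set)  # service name -> agent ids

    def register(self, agent_id: str, service: str) -> None:
        """Announce that an agent offers a service."""
        self._providers[service].add(agent_id)

    def deregister(self, agent_id: str, service: str) -> None:
        """Withdraw an agent's offer, e.g. on shutdown."""
        self._providers[service].discard(agent_id)

    def find(self, service: str) -> set:
        """Return the ids of all agents offering the requested service."""
        return set(self._providers[service])
```

In a real deployment the registry would additionally need the persistence, licensing and recovery behaviour listed in the requirements above.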

For the non-functional requirements we considered the following aspects:

• Performance: the response time for the communication between agents

• Interoperability: the ability to run in inhomogeneous IT environment

• Availability, Reliability, Recoverability, Robustness, Fault Tolerance: stability of operation

• Usability: ability to integrate new functions in the system

• Extendibility, Modifiability: ability to add new agents at run-time

• Testability: ability to perform automated tests

• Deployment and Commissioning: requirements on installation procedure

• Maintainability: requirements on documentation and programming conventions

• Security: ability to encrypt the communication between external agents

• Scalability: ability to handle a growing demand on work

The results of the requirements analysis have been compiled and can be found in Appendix 2.


Task 1.6: Performance measurement

The aim of this task is to define performance indicators which allow measuring the efficiency of the solution and estimating the benefit. To do so, a meeting with plant experts was held. It was decided to measure the performance of the application using indicators defined along two axes: information technologies and business.

• IT indicators: refer to performance measurements focusing on the technological aspects of the solution; this includes:

o Response time to find a solution

o Availability of the application

o Total availability of the application as a percentage

o Mean time between failures

o Resource efficiency:

- CPU usage

- Memory usage

- Network load

- Impact on external systems (databases, MES, ERP)

• Business indicators: refer to the business benefits provided by the solution:

o For the use case “Alternative thickness”:

- number of coils affected by the I2MSteel system per month

- percentage of coils affected by I2MSteel per month

- mean time needed for the reallocation

o For the use case “Reallocation”:

- tonnage proposed by I2MSteel

- tonnage reallocated


2.2.2 Work package 2: Development of overall, system and application architecture and

development of semantic model

Task 2.1: Creation of technical environment

The goal of this task was to define and set up the technical environment required for the development and implementation activities. Generally, this task can be subdivided into the following three subtasks:

• Selection of agent platform

• Selection of development languages and environment

• Selection of project management system

Selection of agent platform:

For the efficient development of the I2MSteel multi-agent system, the consortium evaluated whether an existing framework was suited to implement the abstract layout described above. Three frameworks were taken into closer consideration:

• JADE, a freely distributed Java-based agent platform,

• JIAC, the agent system of the DAI laboratory in Berlin, and

• D’ACCORD, an agent platform developed by SIEMENS Corporate Technology utilising .NET technology.

SIEMENS, BFI and CSM conducted an in-depth requirements comparison, both for the general development of the agent framework and for the realisation of the use cases described above. The comparison table is enclosed in Appendix 3. During a point-by-point comparison, the consortium found that, besides meta-classes for agents and their communication, the marketplace, a very central element of the I2MSteel general concept, was readily available in the D’ACCORD platform. The latter provided all the necessary basic objects for a quick development of the framework.

Selection of development languages and environment:

Due to the selection of D’ACCORD, which is based on the .NET framework, as the agent platform, C# was selected as the main target language. However, based on the requirements analysis performed within WP1, support for other languages or frameworks might be necessary in order to fulfil the constraints (such as different operating systems, different interfaces, security issues, etc.) of the IT landscape at the plant. For this reason we decided to use C# for internal agent development; in cases where the usage of other languages or frameworks becomes mandatory, e.g. for the development of external agents, the appropriate language and development environment will be applied.

In relation to this decision we selected Microsoft Visual Studio as the development environment. Additionally, we performed a consolidation of coding conventions between the partners and established a rule set for source code design. This rule set includes specifications for the notation of variables, formal code design and best practices for coding. To support the developers in complying with the conventions, we decided to use the commercial tool ReSharper, which integrates the capability to check the conventions automatically within Visual Studio.

Regarding the SOA architecture for the services framework and protocols, the standard SOAP and REST protocols over secured HTTP connections have been selected. A more detailed description of the technologies and their analysis is given in the WP4 section.

Selection of project management system

The aim of the project management system is to support the users in collaborative software development, allowing contributions to be gathered, assembled and tested in an integration environment. In order to select a suitable system, we investigated the requirements according to the following main requested features:

• Configuration management (ability to track and store changes in developed software as well as in reused software components and libraries), taking into account integration with Integrated Development Environments (IDEs), traceability of performed changes, cost and availability (time to obtain the tool and ease of deployment).

• Exchange of large data files, to handle large data sets for tests or files required by software development environments (e.g. an ISO image of a DVD), security being a mandatory feature.

• Availability of a wiki to document software developments. Mandatory features are export capability, for example to PDF, in order to produce portable documentation, access security, and the availability of a notification mechanism to alert users of changes or new contributions.

• Availability of key features in the software development environment, defined as follows:

o Compatibility with the SCRUM development process, which is the methodology chosen for the management and software development processes of the project.

o Link from the software development tool with Requirement Engineering tools

o Integration of an automated software build tool, to allow frequent tests

o Availability of a bug tracker to report and trace issues.

Based on those requirements and on the experience of each partner, two systems, Redmine and Microsoft TFS (Team Foundation Server), were selected as the most suitable candidates. A comparison of the features of these systems covering the software development process has been performed and is summarised in Table 1.

Feature | Redmine | Team Foundation Server
Documentation integration/generation, reporting | Yes: integrated wiki, discussion forums, news blogs, email integration, calendars, Gantt charts, export to PDF and Excel/CSV | Yes: workflow definitions, process documentation
Test planning integration | Yes | Yes
Automated software build tool | Yes, via plugin | Yes
SCRUM support | Yes, via plugin | Yes
Wiki | Yes | SharePoint wiki, other plugins
Plugin API | Yes | Yes
Revision Control System | Git, Mercurial, Bazaar, Darcs, CVS, Subversion | Git, Bazaar (via plugin), Subversion (via SvnBridge), TFS version control
Input Interfaces | Web, Email, REST, Mylyn | Web, Email, CLI, SOAP, Visual Studio
Large file transfer | Yes: revision control, plugin support | Yes, with revision control
Bug Tracking | Yes | Yes

Table 1. Feature comparison

In the end we decided to use both systems. Our preferred solution was MS TFS, due to its larger functionality scope, better integration with Visual Studio and dedicated support of the SCRUM process; however, because of the high cost of the full installation, we decided to use a reduced version for the main functions such as code maintenance, and to apply Redmine for the missing features such as document management and the wiki.

Both systems have been installed on the CETIC IT infrastructure and are accessible to the partners.

Task 2.2: Definition of the development process

Different perspectives have been taken into consideration for the definition of the most appropriate development process. First, the development group is scattered across different countries in Europe; second, the project is a research project and not an industrial one. Therefore, starting from Siemens' experience, the development team decided to adopt an agile process based on a mixture of the Rational Unified Process (RUP) (specifically OpenUP) and SCRUM.

RUP is an iterative software development process in which the project life cycle is considered as

consisting of four phases: Inception, Elaboration, Construction and Transition. The objective of

inception is the scope definition and schedule estimation. During the elaboration phase a problem domain analysis is made, i.e. use case definitions, and the architecture of the system gets its basic form. The construction phase is dedicated to the implementation of the system, and finally the transition phase is devoted to putting the system into production.

SCRUM is a way for teams to work together to develop a product. Product development using SCRUM proceeds in small steps; each step builds upon previously achieved results. Building products one small piece at a time encourages creativity and enables teams to respond to feedback more quickly, so the course of the project can be steered more precisely. In other words, SCRUM is a simple framework for effective team collaboration on complex projects.

In a dedicated workshop this standard was applied to the specific conditions of the project. The

overall project phases i.e. inception, elaboration, construction and transition are split into SCRUM

iterations. The outcome is a list of tasks and corresponding deliverables that were aligned with the whole team. Several roles have been identified, reconciling the need to integrate the RFCS project management with the adopted agile process. The most important are:

Product Owner

The PO serves as an interface between the team and other involved parties (stakeholders). He gathers technical requirements and prioritizes them. In I2MSteel the POs are the Work Package managers.

SCRUM Master

The SCRUM Master manages the process and is accountable for removing impediments to the team's ability to deliver the product goals and deliverables. In the first year the SM was provided by Siemens; this role is now filled by CSM.

Development team

The Development Team is responsible for the product development deliverables at the end of the

sprints.

The SCRUM methodology is normally adopted by a collocated team with daily meetings. In order to adopt it in our project, with people located in several European countries, weekly meetings organised as web and call conferences have been adopted instead.

During the execution of the project some adjustments have been necessary to better fit the needs of a research project: e.g. four smaller subgroups dedicated to specific WPs have been identified, each managing its own weekly meeting independently, while once a month a general meeting is scheduled in order to share the obtained results, determine the activities for the next month and perform a strengths-weaknesses analysis.

During the inception phase several use cases using the Cockburn templates have been defined as

already described in Task 1.1.

To support the SCRUM activities we selected two systems: Redmine and Microsoft TFS (Team

Foundation Server). This decision was based on the requirements shown in Task 2.1 “Selection of

project management system” and on the individual experiences of each partner. The first one – Redmine – is mostly used to exchange documentation not directly related to application development, whereas TFS is highly integrated in the SCRUM process and in the software development: e.g. a concept for data management, a configuration management concept, and mechanisms for traceability are a vital part of the TFS installation.

Moreover, in TFS the status model for requirements is implemented, the build management is installed, and the development structure is defined and installed.


In the end, this combination of RUP and SCRUM enables efficient steering of the project, provided the adaptations detailed above are put in place.

Versioning concept

Because of the distributed development teams a versioning concept was created, as shown in Figure 6. The red lines represent the work carried out independently by each team, while the green line is the complete project. Each time a team reaches a consolidated version, it is released into the main line. The black line represents the globally released version. This behaviour is managed inside the TFS.

Figure 6. I2MSteel versioning concept for software development.

Test Concept

A test plan has been defined based on the well-known V-model (see Figure 7). The system testing

examines all the components and modules that are new, changed, affected by a change, or needed

to form a complete delivered system (all holons and agents). Where integration testing tries to

minimize outside factors, system testing requires involvement of other systems and interfaces with

other applications. The complete system must be tested end-to-end.

Figure 7. V-Model of software testing

Task 2.3: System and application architecture definition


In the course of WP2 the consortium partners already defined the development process strategy

and the technical environment for this development. Task 2.3 sets the focus on the software archi-

tecture of the I2MSteel system and the later application. Both are closely coupled to the require-

ments analysis performed in WP1, in which we thoroughly worked out the fundamental functional

and non-functional aspects of a solution for our demonstration plant examples.

Figure 8. General architecture of the I2MSteel system

Figure 8 shows the general architecture of the I2MSteel system integrated in the infrastructure of the plant. According to this scheme, the solution executed within the I2MSteel Agent Framework has a universal nature, meaning the algorithms defined by the agents can be developed independently from the configuration of the real plant. The information about the configuration of the plant is stored within the ontology and is managed by SMW (Semantic MediaWiki). The connection to the process databases is regulated by the SOA (service-oriented architecture), realising unified access to the plant IT. In this way, the adaptation of the whole solution to a new plant requires only the adaptation of the ontology and the SOA access points.

Evaluating the IT environments in the steel industry, the majority of deployed operating systems are various flavours of Microsoft Windows. The requirements analysis in the previous tasks straightforwardly required the I2MSteel software application to support the latest editions of this OS. Our primary application will be a Windows-based executable and our implementation language will be C#.NET. As shown later in this section, we also support other OSs and languages with our concept of external agents.

Starting from the actual use cases we can generally foresee autonomous agents becoming engaged

either independently or in cooperation with other agents. Consequently, an agent may wish to provide or request a specific service. In common multi-agent systems, agents are informed about the presence and availability of other agents by an instance that handles both the service specifications and the addresses of the corresponding agents. Following these thoughts, we arrive at a very decisive component of the I2MSteel system architecture: the so-called broker. SIE brought in strong experience here with regard to object-based management components. Agents have to register with the broker, announcing the potential services they offer. Conversely, whenever an agent requires a specific service, it can contact the broker and ask it to provide the address of an agent offering the desired service. Summarizing, the broker acts as a coordinating and mediating instance which manages the agents and their skills.
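The register/lookup behaviour of the broker can be sketched as follows. This is an illustrative sketch in Python (the actual I2MSteel broker is implemented in C#.NET); the class and method names are assumptions, not the project's API.

```python
class Broker:
    """Minimal service broker sketch: agents register the services they
    offer, and other agents look up a provider address by service name.
    (Names and structure are assumptions for illustration only.)"""

    def __init__(self):
        self._registry = {}  # service name -> list of agent addresses

    def register(self, agent_address, services):
        # An agent announces the services it can provide.
        for service in services:
            self._registry.setdefault(service, []).append(agent_address)

    def deregister(self, agent_address):
        # Remove the agent from every service it was registered for.
        for providers in self._registry.values():
            if agent_address in providers:
                providers.remove(agent_address)

    def find(self, service):
        # Return the address of one agent offering the service, or None.
        providers = self._registry.get(service, [])
        return providers[0] if providers else None
```

A requesting agent would then call something like `broker.find("mail-notification")` and contact the returned address directly, which matches the mediating role described above.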

The broker is not coupled to a specific use case. Instead all use case solutions will utilize this com-

ponent. Physically speaking, the broker represents an installation of the I2MSteel software system.

Furthermore, following our requirements specification, we design the broker in a way, so that mul-

tiple brokers can communicate with each other. The latter is important to support the exchange of

agent messages even between different plants. Please refer to Figure 8 for an illustration of this

connection.

Figure 8 also depicts the architectural solution of the I2MSteel project for handling the inclusion of

external components. As we usually expect a heterogeneous IT infrastructure at the steel plant –

e.g. due to slowly grown brownfield environments – one requirement for our system is to flexibly connect to so-called external agents. Such external agents are installed next to the existing IT subcomponents of relevance for our use cases, e.g. the Level-2 system. From there, they are registered to a broker representing the I2MSteel main installation. The exemplary service realized in

Figure 8 is a database agent connected to an external database. The introduction of external agents in our architecture – as shown by Figure 8 – is crucial. It allows installing agents on practically any operating system, implemented in an arbitrary programming language. Connecting to the

broker (which has to reside on a computer running a Microsoft Windows OS) and becoming an

I2MSteel agent just requires exchanging messages in the right format. The nature of the agent is

factually irrelevant. Here, CETIC developed possible representations of the external communication in the form of secure services.


Figure 9. Global architecture of I2MSteel system

Agent platforms have common rules about the communication between agents. In our case the

communication is the backbone of the solution as well and becomes therefore a central element of

the architecture. Registrations with the broker, request of services, transfer of messages in general

and later on negotiations at the market place depend on a common language. In accordance with

common agent languages we defined a specific interface, the Agent Holon Definition Language

(AHDL) to standardise the communication throughout the I2MSteel landscape. AHDL strongly sup-

ports our methodology to use ontologies to adapt a set of agents to a plant and represents the

communication core which remains completely independent from any plant-specific dialect. Translating the plant's dialect into the I2MSteel nomenclature will be the task of the semantic agents shown later in Task 2.4 and Task 2.5.

In Appendix 5 we present more details on AHDL. At this stage, we would like to explain the described abstract architecture with a simple example. Figure 10 presents the sequence diagram of a mail notificator agent which is being used by a Level-2 system. First, the mail notificator registers with the broker and exposes its services. The Level-2 system (assumed to be registered already) asks the broker to perform a service search in its directory. After finding the mail notificator, the broker returns the address of the suited agent to the Level-2 system. Independently from the broker, the Level-2 system and the mail notificator enter a negotiation so that the Level-2 system can use the mail notificator service.
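This sequence can be walked through in code. The sketch below is a toy Python version; the message shapes and field names are invented for illustration and do not reproduce the actual AHDL format (see Appendix 5).

```python
# Toy walk-through of the mail-notification sequence. The message
# types and field names here are invented; the real exchange uses AHDL.

def handle(broker_directory, message):
    """A toy broker understanding two message types: register and search."""
    if message["type"] == "register":
        broker_directory[message["service"]] = message["sender"]
        return {"type": "ack"}
    if message["type"] == "search":
        return {"type": "result",
                "address": broker_directory.get(message["service"])}

directory = {}

# 1. The mail notificator registers with the broker and exposes its service.
handle(directory, {"type": "register",
                   "sender": "mail-notificator@plant",
                   "service": "mail-notification"})

# 2. The Level-2 system asks the broker to search its directory.
reply = handle(directory, {"type": "search",
                           "sender": "level2@plant",
                           "service": "mail-notification"})

# 3. The broker returns the address; Level-2 and the notificator then
#    negotiate directly, independently of the broker.
assert reply["address"] == "mail-notificator@plant"
```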


Figure 10. Communication mechanisms of a simple example of a mail notification service.

Task 2.4: Investigation and evaluation of industrial standards regarding ontologies

A report has been produced providing a comprehensive analysis and evaluation, in terms of domain

coverage, of existing relevant industrial standards in the manufacturing domain for products and

processes along the production line. The main focus of this report is to establish the basis for iden-

tifying and selecting relevant domain knowledge models that can be re-used and/or adapted for

describing the I2MSteel demonstration scenario.

The goal of this work package is to formulate a comprehensive semantic model (common lan-

guage) that covers all relevant aspects that need to be reflected to establish the basis for seamless

information exchange in order to optimize the global steel production workflow. The formulated

semantic model will define a precise and at the same time generic representation of the existing

high-level knowledge of the steel production domain. For doing so, we require the combination of

multiple ontologies that address partial aspects of the I2M Steel use case scenarios.

Several ontologies, standards or application scenarios can be (partially) reused or used as inspiration for engineering a generic semantic model for steel production. The cited report outlines the state of the art of industrial standards, ontologies and accomplished application scenarios that are of interest and relevance for the I2MSteel project. It describes existing industrial standards that cover the concepts and terminologies required for seamless information exchange between the involved systems and applications along the overall steel production workflow. The third section of the cited report describes three related research approaches that established and implemented an ontology for the steel production domain. The report finishes by providing a rough overview of existing enterprise ontologies covering concepts and terminology of generic enterprise processes, as well as foundational ontologies establishing meta-models that help to integrate various knowledge models in an integrated manner. The whole report is attached as Appendix 6.


Task 2.5. Development of a semantic model by combination of all necessary manufacturing ontolo-

gies

The overall goal of this task is the development of a domain ontology for the steel manufacturing domain. This domain model aims to describe all relevant concepts needed to describe the selected use cases within the I2MSteel project, as well as to provide the foundation for inferring implicit knowledge by means of reasoning mechanisms. The work for this task relies heavily on the results of Task 2.4, reusing conceptual ideas as well as domain know-how gathered through the analysis of available related standards in the steel manufacturing domain.

Our underlying conceptual modeling approach which is described in the following subchapter makes

use of a concept model to provide a common way for engineering experts to express knowledge

about a steel plant. The main task of this approach is to specify the usage of the provided concept

model to allow for the integration of several partial knowledge models defined by various domain

experts who use different tools and standards.

The main benefit of ontologies for the I2MSteel project (as well as the steel manufacturing domain)

is the fact that we establish the basis for the development of plant-independent solutions by

providing the vocabulary for defining the structure of the steel plant, the underlying processes and

the steel product in a formal manner. For each plant, a dedicated instantiation of the structure and

process model describes the specific plant configuration. In addition, Semantic Agents can access the semantic model via a SPARQL endpoint.

Thus, the developed solutions can easily be reused within other plants by simply adjusting the de-

scription of the plant configuration. For instance, the implementation of the “Alternative Thickness

Functionality” can be transferred to another plant as any changes are captured within the semantic

model, which can be accessed by the agents.
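The idea that agent logic stays unchanged while only the plant description is swapped can be illustrated with a toy triple store. The triples and the query function below are invented examples; the real agents would issue SPARQL queries against the endpoint mentioned above.

```python
# Toy triple store mimicking how an agent could query the plant model.
# The triples are invented; the real system exposes the ontology via a
# SPARQL endpoint over the Semantic MediaWiki installation.
plant_a = [
    ("FinishingMill", "has_part", "FourHighStand1"),
    ("FourHighStand1", "has_role", "rolling stand"),
]
plant_b = [
    ("FinishingMill", "has_part", "SixHighStand1"),
    ("SixHighStand1", "has_role", "rolling stand"),
]

def rolling_stands(triples):
    """Plant-independent agent logic: find all components with the
    'rolling stand' role, whatever the concrete plant configuration."""
    return [s for (s, p, o) in triples
            if p == "has_role" and o == "rolling stand"]

# The same agent code runs against either plant description:
assert rolling_stands(plant_a) == ["FourHighStand1"]
assert rolling_stands(plant_b) == ["SixHighStand1"]
```

Only the plant description changes between the two calls; the agent function itself is untouched, which is exactly the transferability argued for in the text.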

The Semantic Modeling Approach

We identified three different roles of domain experts in the steel manufacturing area as target users of the semantic model, as shown in Figure 11:

Figure 11 Three different expert roles

1. the knowledge engineers specifying the partial steel plant models for various discipline-

specific aspects of a steel plant in collaboration with the engineering experts,


2. the suppliers (manufacturer, third party supplier or service supplier) specify abstract infor-

mation about their equipment, e.g. characteristics about devices, and

3. the plant engineers (plant owner, engineering expert or operator) construct, operate and

maintain the plant.

Figure 12 Modeling levels of the concept model in correspondence to the MOF modeling levels as

defined in (OMG, 2004)

Thus, we support all the different roles in a plant by using a four-layered metamodel architecture to structure the plant knowledge (see Figure 12). This metamodel architecture is described in several standards, most prominently the Meta Object Facility (MOF) Core Specification 1.4 defined by the OMG [14]. The key modeling concepts of the MOF are type and instance, and the ability to navigate from an instance to its metaobject (its type). We distinguish the following levels:

• Level M3: The concept model specifies general Concepts that can be used by the software engineers to define their partial steel plant models. We chose these concepts in accordance with principles of object-oriented design, since software and especially knowledge engineers are familiar with them. The three basic concepts used for modelling at this level are presented in Figure 13.

• Level M2: The knowledge engineers that specify the partial plant models on this level describe or discuss specific concepts and terms with experts of different engineering disciplines. The resulting models are called metamodels and can be used by levels M1 and M0. The objects that are used by the authors are called metatypes, e.g. an engineer defines the metatype. As M2 models cover very generic aspects of the steel manufacturing domain, they are also labelled as foundational models.

• Level M1: The suppliers, e.g. the manufacturers of components, store general information on abstract Types in their model, such as specifications of their products in a product catalog. The resulting model of the M1 level is called the type model and is used by the M0 level. For example, Siemens describes the Motor Siemens 1FK7 and its specification details in a data sheet.

• Level M0: On this level, plant engineers define a concrete industrial plant with instances of the types of level M1. The resulting model is called the instance model. The plant engineer specifies this model when he installs components of the suppliers in his plant, and stores all component-specific information and the relations between the components, e.g. Motor m1 is an instance of Motor Siemens 1FK7.
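The type/instance navigation required by MOF can be mirrored in code. The following Python sketch maps the motor example above onto class hierarchies; the attribute names and the serial number are invented for illustration, and M3 is played here by Python's own class machinery.

```python
# Hypothetical sketch of the modelling levels, following the
# Motor Siemens 1FK7 example from the text (attribute values invented).

# M2: metatype defined by the knowledge engineer
class Motor:
    pass

# M1: supplier type carrying catalogue data from the data sheet
class MotorSiemens1FK7(Motor):
    manufacturer = "Siemens"
    typenumber = "1FK7"

# M0: concrete instance installed in the plant ("Motor m1")
m1 = MotorSiemens1FK7()
m1.serialnumber = "s42"   # instance-specific information (invented)

# MOF's key ability: navigate from an instance to its (meta)type.
assert type(m1) is MotorSiemens1FK7
assert isinstance(m1, Motor)
assert m1.typenumber == "1FK7"   # catalogue data inherited from M1
```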

Figure 13 Concept model with all levels and models

The benefits of this conceptual model approach can be summarized as follows:

• several metamodels are derived from the concept model in a unified way and can be related to each other

• design decisions fixed in the concept model allow for an automated verification of the user models with regard to their conformity with the concept model

• for any domain-specific aspect of an industrial plant, a custom metamodel can be introduced to allow for domain-specific customized navigation and visualization of plant engineering data.

Exemplary Foundational Models (M2 Models)

For the semantic modelling of the steel manufacturing domain, we identified six M2 or foundational

models:

• Structure Model: captures all information on the structural aspects of the steel plant by describing the hardware components, roles and interfaces.

• Measurement Model: captures information on relevant measurements of the steel production process by describing measurements, roles, and locations.

• Storage Model: describes how measurements are stored and how the data can be accessed.

• Order Model: describes how technical and commercial order information is handled and where it is stored.


• Product Model: describes the product-related information in a very abstract manner. By distinguishing different product types (and thus product templates), such as melt, slab and coil, those product types can be characterized by a list of attributes that are of relevance and captured in the life cycle of the product. The design of the product model is inspired by the production definition model of ISA-95.

• Process Model: describes the processes executed in a Steel Plant.

Figure 14 Overview of the six foundational models of the steel manufacturing domain

As the structure and the measurement model are very central to our modeling approach, we describe them in this report in further detail. For the documentation of the remaining four foundational models we refer to our Semantic MediaWiki installation, which encompasses the processable knowledge models as well as their documentation and examples.

The Structure Model

The Structure Model describes the Steel Plant Topology, which includes the Partonomy. Partonomy

is a type of hierarchy which deals with part-whole relationships in contrast to taxonomy whose

categorisation is based on discrete sets.

The main concepts of the Structure model are hardware components, roles and interfaces.


Figure 15 Simplified Overview of the Structure Model

Hardware Component

• A component describes a single component or a composite component in a steel plant.

• A component can be composed of other components; it is then a composite hardware component. The relationship 'has part' is used to define the structural composition of hardware components.

• Component instances can be physically connected to other components, e.g. a motor to a conveyor belt. This is the 'connected to' relation in Figure 15.

• A component can relate to an industrial process activity instance; the relation 'associated with' is used to define the alignment between structural components and process activities.

• Some examples of Hardware Components are: Production System, Finishing Mill, Sensor, ERP, Customer DB, Steel Plant MES, etc.
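The 'has part' partonomy can be sketched as a small recursive data structure. The component names come from the hot rolling mill example later in this section; the class itself is an assumption, not the project's implementation.

```python
# Sketch of the 'has part' partonomy from the Structure Model
# (data structure invented for illustration).
class HardwareComponent:
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = list(parts)   # the 'has part' relation

    def all_parts(self):
        """Names of all components below this one, depth first."""
        result = []
        for part in self.parts:
            result.append(part.name)
            result.extend(part.all_parts())
        return result

# A composite hardware component: the finishing mill has five stands.
finishing_mill = HardwareComponent(
    "Finishing Mill",
    [HardwareComponent(f"Four-High Stand {i}") for i in range(1, 6)])
hot_rolling_mill = HardwareComponent("Hot Rolling Mill", [finishing_mill])

assert "Four-High Stand 3" in hot_rolling_mill.all_parts()
```

This is a part-whole hierarchy (partonomy), not a taxonomy: the stands are parts of the mill, not subtypes of it.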

Role

• Role is a concept that describes an abstract functionality without the underlying technical implementation.

• Examples: a particular function of a tool, a function for the allocation of energy, resources or material, quality control, or storage management.

• In our future steps, we are planning to develop a taxonomy of roles.

Interface

• An interface is a concept that refers to a point of interaction between two (hardware) com-

ponents.


• It is applicable at the level of hardware and software components (however, currently we focus on the modelling of the hardware elements).

• Although there exist several types of interfaces, such as software interfaces indicating data/information flow, signal interfaces and production interfaces, within our modelling approach we focus on the description of interfaces between (hardware) components, i.e. components that are physically installed in the plant.

Measurement Model

The measurement model allows representing all measurements describing the product states and properties during the progress of producing the steel product. A measurement can represent a sensor or a particular lab measurement describing the chemical characteristics of the (intermediate) steel product. We do not represent the measurement value itself but provide information on where and how the I2MSteel agents can access the described data.

The main concepts of the Measurement Model are Measurement, Measurement Role and Location.

Figure 16 Simplified overview of the Measurement Model

Measurement

A Measurement describes a single or complex measurement of the properties of a product in an

industrial plant. It is either a Measurement Instance or a Measurement Template.

Although we do not describe this explicitly, measurements can be simple or complex:

• simple measurements refer to a measuring unit, such as a sensor, that consists of one unit only


• complex measurements refer to several measuring units, such as the ordered sequence of width sensors positioned along the finishing mill, that are bundled as one measuring unit by using the 'part-of' and 'before' relationships. In such a case, the measuring units being grouped into a complex measuring unit are described without specifying the location.
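A complex measurement grouping several units with 'part-of' and 'before' could look roughly like this. The field names, sensor identifiers and the role label are invented placeholders following the width-sensor example; the real model lives in the ontology, not in such a dictionary.

```python
# Hypothetical sketch of a complex measurement: three width sensors
# along the finishing mill bundled into one measuring unit.
complex_measurement = {
    "name": "finishing-mill width profile",      # invented identifier
    "role": "Thickness Role",                     # one of the roles below
    # 'part-of': the grouped measuring units (described without location)
    "parts": ["width_sensor_1", "width_sensor_2", "width_sensor_3"],
    # 'before': ordering of the units along the finishing mill
    "before": [("width_sensor_1", "width_sensor_2"),
               ("width_sensor_2", "width_sensor_3")],
}

# Sanity check: every 'before' pair refers to units grouped in 'parts'.
assert all(a in complex_measurement["parts"]
           and b in complex_measurement["parts"]
           for a, b in complex_measurement["before"])
```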

Measurement Role

The measurement role describes the particular role of a measurement; for instance, we distinguish the following roles:

• Temperature Role

• Thickness Role

• Roughness Role

• Flatness Role

• Lab Result

• Etc.

A measurement might have several measurement roles, for instance a combined sensor unit that measures not only height but also ripple or flatness values, e.g. of heavy plates.

Location

The location indicates the hardware component instance from where the measurement is captured,

e.g. the location of a particular sensor.

In order to describe the precise position of the location, we use the attribute "horizontal position", which can have the values "before", "after" and "inside", and the attribute "vertical position", which can have the values "upper", "lower" and "na" (not applicable or irrelevant).

Model Example: Hot Rolling Mill

In this chapter a concrete application example of the developed ontology is presented. To analyse the ability of our approach to represent the necessary hardware components, involved processes and related measurements, we chose the “alternative thickness” use case for the first exemplary implementation of our ontology.

In general, the “alternative thickness” use case represents a type of production incident which occurs during the hot rolling step. It assumes that a certain slab cannot be rolled to the desired thickness because of disturbances that occurred earlier. Such disturbances could be, e.g., deviations in the chemical analysis of the slab, a wrong temperature or a machinery dysfunction. In these cases the slab has to be rolled to an alternative thickness. The solution for this use case should determine the best alternative thickness, i.e. the one which produces the minimum of follow-up costs.
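The decision at the core of this use case reduces to a minimisation over candidate thicknesses. The sketch below illustrates that shape only; the candidate values and the cost figures are made-up placeholders, not project data, and the real follow-up costs would come from the agents and the plant model.

```python
# Hedged sketch of the "alternative thickness" decision: pick the
# rollable thickness with the lowest follow-up cost. All numbers here
# are invented placeholders for illustration.
def best_alternative_thickness(candidates, follow_up_cost):
    """Return the candidate thickness minimising follow-up cost."""
    return min(candidates, key=follow_up_cost)

candidates_mm = [2.0, 2.5, 3.0]            # hypothetical rollable targets
cost = {2.0: 1200.0, 2.5: 400.0, 3.0: 700.0}  # hypothetical follow-up costs

assert best_alternative_thickness(candidates_mm, cost.get) == 2.5
```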

In order to describe the structural components of this use case within our ontology, we assumed a configuration of the hot rolling mill as presented in Figure 17.


Figure 17 Overview of the structural components of a hot rolling mill

Description of the Hot Rolling Mill

We chose the hot rolling mill as a representative part of the production in a steel works in order to show the ability of our model to describe the relevant components and processes. Initially, a short overview of the hot rolling mill is given before we start to present the modeling.

In a hot rolling mill slabs are rolled to coils of strip. Typically, slabs have a length from about 5000

mm to 12000 mm, a width from about 500 to 2000 mm and a thickness of about 200 to 300 mm.

They are rolled to strips which have a thickness from about 1.5 mm to 15 mm.

The slabs are coming from the continuous casting plant or a slab repository into the walking beam

furnace. The walking beam furnace is a reheating furnace where the slabs are heated to a temperature of 1,200 °C. When leaving the furnace, slabs are covered by scale which has to be removed

by a descaler. Then the slab is rolled to a prestrip (thickness between 22 and 35 mm) in a roughing

mill. The roughing mill consists of a reversing four-high stand and a vertical edger for controlling

the width of the prestrip. Then the prestrip is rolled into a coil in a coil box. The use of a coil box

has two effects: first, it allows a compact design for the complete hot rolling mill, as the finishing mill can be positioned much closer to the roughing mill than in a mill without a coil box. Second, a

coiled prestrip holds its temperature much better and more evenly than a prestrip which is

stretched out. Afterwards, the prestrip is descaled again and then is rolled to the final strip in the

finishing mill. The finishing mill consists of several four-high stands (here we use five). Then, the

strip is cooled down on the run-out table consisting of the roller table itself and the laminar cooling.

Finally, the strip is rolled into a coil in the coiler. Between these components the slabs and prestrips/strips, respectively, are transported on roller tables.

Structure Model for the Hot Rolling Mill


A simplified logical overview of the structure is given in Figure 18. Here, as well as in the model, we have omitted the roller tables, as their only function is transport. The run-out table has to be there because it is coupled with another function: laminar cooling.

[Figure 18 shows the 'has part' decomposition: the Hot Rolling Mill comprises the Walking Beam Furnace, Descaler 1, the Roughing Mill (Vertical Edger, Horizontal Reversing Stand), the Coil Box, Descaler 2, the Finishing Mill (Four-High Stands 1 to 5), the Run-out Table (Exit Roller Table, Laminar Cooling) and the Coiler.]

Figure 18 Overview of the structure of the hot rolling mill

After having understood the mode of operation of a hot rolling mill, it is possible to do the modeling. We model the M2, M1 and M0 levels. On the M2 level there are the generic types (Hardware Component Templates) of the components. On the M1 level we have chosen to use an abstract manufacturer (xyz); here the specific types (also Hardware Component Templates) are used. On level M0 we find the concrete instances.

Here the type already describes the function of the component. So roles can be modeled nearly

parallel to the types: e.g. the hot rolling mill has the role “hot rolling”, the roughing mill has the

roll “roughing mill”, the finishing mill has the role “finishing mill”, a descaler has the role “descaler”

and the coiler has the role “coiler”. Some components are specific types with a more generic role,

so the walking beam furnace has the role “reheating furnace” and a four-high stand has the role

“rolling stand”. This simple role model is possible because every component has exactly one func-

tion. This is simply due to the fact that our model does not yet cover process-related information in

large detail. We assume that by integrating additional process-related concepts and knowledge, the

modelling of roles will become more complex. If, for example, one sensor measures multiple values such as the temperature and thickness of the strip, this sensor will have several roles: “Measure Temperature” and “Measure Thickness”.
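This one-function-per-component role model can be sketched as a simple mapping; the component and role names below follow the text, while the multi-value sensor is a hypothetical example:

```python
# Sketch of the role model described above: a component may carry one role
# or, as with the hypothetical multi-value sensor, several roles.
roles = {
    "Hot Rolling Mill": ["hot rolling"],
    "Roughing Mill": ["roughing mill"],
    "Finishing Mill": ["finishing mill"],
    "Walking Beam Furnace": ["reheating furnace"],  # specific type, generic role
    "Four-High Stand 1": ["rolling stand"],
    "Multi Sensor": ["Measure Temperature", "Measure Thickness"],
}

def has_role(component: str, role: str) -> bool:
    """Check whether a component carries a given role."""
    return role in roles.get(component, [])

print(has_role("Walking Beam Furnace", "reheating furnace"))  # True
```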

As an example we take a closer look at the rolling stands and the finishing mill (see Figure 19).


[Figure 19 diagram: on level M2 the generic types (Hardware Component Templates) Rolling Stand with subtypes Vertical Edger and Four-High Stand; on level M1 the specific types from manufacturer xyz (typenumbers ve1, hr3, fh6); on level M0 the instances in the specific hot rolling mill (Hardware Component Instances): Vertical Edger (serialnumber v12), Horizontal Reversing Stand (serialnumber h07) and Four-High Stands 1 to 5 (serialnumbers f22, f95, …, f77).]

Figure 19 Example for types and instances

Here we see that the type Rolling Stand has two subtypes: Vertical Edger and Four-High Stand. In fact, the Rolling Stand could have other subtypes as well, e.g. a six-high stand. But as such other subtypes are not used in our context, there is no need to model them. The ability to work in reverse has been modeled by an attribute of the M2 type Four-High Stand. In the implementation, the M2 level corresponds to the namespace i2msteel, the manufacturer (M1 level) has the namespace xyz and the M0 level the namespace am.hot rolling mill.
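The three modeling levels can be sketched in code; the class names, namespaces and attribute values below follow Figure 19, while the Python structure itself is our illustration:

```python
# Sketch of the M2/M1/M0 levels: generic types, manufacturer-specific types
# and concrete instances, with the namespaces used in the implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HardwareComponentTemplate:
    """M2/M1: generic or manufacturer-specific type of a component."""
    name: str
    namespace: str                      # "i2msteel" (M2) or "xyz" (M1)
    supertype: Optional["HardwareComponentTemplate"] = None
    manufacturer: Optional[str] = None  # set on M1 types only
    typenumber: Optional[str] = None

@dataclass
class HardwareComponentInstance:
    """M0: concrete component in a specific plant."""
    name: str
    namespace: str                      # e.g. "am.hot rolling mill"
    component_type: HardwareComponentTemplate
    serialnumber: str = ""

# M2: generic types
rolling_stand = HardwareComponentTemplate("Rolling Stand", "i2msteel")
four_high = HardwareComponentTemplate("Four-High Stand", "i2msteel",
                                      supertype=rolling_stand)

# M1: specific type from the abstract manufacturer xyz
fh6 = HardwareComponentTemplate("Four-High Stand", "xyz", supertype=four_high,
                                manufacturer="xyz", typenumber="fh6")

# M0: concrete instance in the hot rolling mill
f22 = HardwareComponentInstance("Four-High Stand 1", "am.hot rolling mill",
                                component_type=fh6, serialnumber="f22")

print(f22.component_type.supertype.supertype.name)  # Rolling Stand
```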

Other models (Measurement, Product, Storage, Order)

In the previous paragraphs we looked at the basic structure of the hot rolling mill. For controlling the rolling process, other information is needed as well: data about the slab being rolled, the (technical) order which describes the expected result, measurements about the current state, and information on where to get the data. To illustrate this, we will look at a particular detail (see Figure 20) of our extensive modeling example, which is fully described within the I2MSteel Semantic Mediawiki.


[Figure 20 diagram: Product:HotStrip has the measurements Temperature1 (role Temperature, provided by Sensor:IMS Temp, with location FT1 after the Finishing Mill), CreationTime (role CreationTime, provided by Sensor:Clock) and HotStripID (role Identifier, provided by Sensor:Process Computer). The measurements refer to a MeasurementOntologyStorage IdentStorage with the attributes ontology (http://i2msteel.org/AM/Databases/QualityDB.rdfs), mapped class, predicate and identifier.]

Figure 20 Example for Product, Measurements and Storage

Here we see the interplay between product, measurements and storage. A product has many measurements. We have chosen to model time values and the product ID as measurements in order to have a unified approach to the product attributes. The ID of the product is provided by the process computer, which is marked as a sensor in this context. (This is an example where one Hardware Component has several roles: firstly, the process computer has the role of a sensor and, secondly, the role of controlling the rolling process.) Measurements are stored in some kind of repository, e.g. simple files or databases. In the example we see a special type of storage: a MeasurementOntologyStorage. This is a subtype of both MeasurementStorage and OntologyStorage. MeasurementStorage simply indicates that the storage holds measurements. OntologyStorage describes how the data is stored and how it can be retrieved; here it holds the information in an ontology. To be independent of proprietary databases, we decided to use OntologyStorage as a generic replacement for DatabaseStorage. In fact, database schemas can easily be converted into simple ontologies by applying the so-called “table to concept” approach: the tables within a database are represented by classes or concepts within the ontology and the columns by relations or predicates. In conclusion, the attribute “ontology” identifies in our model the database and its schema, “mapped class” identifies a table in the database, and “predicate” a column in this table. “identifier” is also a column in the table, representing the key identifier for the value (e.g. slab id).
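The “table to concept” approach can be sketched as follows; the table name, columns and values below are hypothetical examples, not taken from the actual quality database:

```python
# Sketch of the "table to concept" approach: rows of a database table are
# turned into (subject, predicate, object) triples of a simple ontology.
# Table -> class, column -> predicate, key column -> identifier.
def table_to_triples(table_name, key_column, rows):
    triples = []
    for row in rows:
        subject = f"{table_name}/{row[key_column]}"
        triples.append((subject, "rdf:type", table_name))
        for column, value in row.items():
            if column != key_column:
                triples.append((subject, f"{table_name}#{column}", value))
    return triples

# Hypothetical quality table with the slab id as key identifier
rows = [{"slab_id": "S001", "temperature": 912.5}]
print(table_to_triples("QualityDB", "slab_id", rows))
```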

The additional hardware components which are needed here have to be added to the structure model. Figure 21 shows all sensors from the measurement example added to the structure of the hot rolling mill. Note that there are no hardware components for storage: storage is not hardware itself and may reside e.g. on the process computer or on servers external to the hot rolling mill.


[Figure 21 diagram: the Hot Rolling Mill has parts Sensor:Clock, Sensor:Process Computer, Sensor:IMS TEMP and the Finishing Mill; the Finishing Mill has parts Four-High Stands 1 to 5 and the sensors IMS TEMP1 and IMS TEMP2.]

Figure 21 Exemplary sensors in the hot rolling mill

Task 2.6 Integration of external and implicit knowledge sources into the semantic model

As already described in the previous Sections, our conceptual modeling approach establish the

basis for semantically mapping knowledge from different levels of expertise into one comprehen-

sive perspective, the I2MSteel semantic model. In order to make use of the explicit as well as im-

plicit available knowledge dedicated inference mechanism are required. This is provided by our

infrastructure (which we will describe in further detail in description of Task 2.7) by means of a

formal verification mechanism for querying and verifying constraints on plant engineering data. A

number of wiki modules handle the communication with external tools and knowledge technology

components, e.g. a Triple Store Connector (TSC) that allows using a triple store with advanced

querying and reasoning functionality. With such a linked triple store, the engineering experts can

define queries in SPARQL (SPARQL Protocol And RDF Query Language) syntax, e.g. to identify

devices with specific attributes.

By means of such SPARQL queries, the implicit knowledge covered within related standards (such as ESIDEL, ISO, or PSL) can easily be made explicit and thus used within any envisioned semantic application. For instance, the information about quality parameters provided by the standard ESIDEL could be used to automatically cluster production orders.
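As an illustration of the kind of query the Triple Store Connector enables, the following sketch evaluates a simple SPARQL-style pattern over an in-memory triple list; the triples and property names are hypothetical examples, not project data:

```python
# Hypothetical triples in the style of the I2MSteel model.
triples = [
    ("am:FourHighStand1", "i2msteel:hasRole", "rolling stand"),
    ("am:FourHighStand1", "i2msteel:manufacturer", "xyz"),
    ("am:Coiler", "i2msteel:hasRole", "coiler"),
]

# Corresponding SPARQL pattern:
#   SELECT ?device WHERE { ?device i2msteel:hasRole "rolling stand" . }
def select(predicate, obj):
    """Return all subjects having the given predicate/object pair."""
    return [s for (s, p, o) in triples if p == predicate and o == obj]

print(select("i2msteel:hasRole", "rolling stand"))
```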

Furthermore, the user-defined constraints on plant engineering data in the wiki can be validated by

reasoning over the inserted semantic engineering data.

As an example we look at the structure information for a Four-High Stand in the Finishing Mill. In Figure 22 these are the “has part” and the “connected to” relations.

“connected to” is a physical connection between component instances. It is symmetric and transitive. (Symmetric means: if component 1 is connected to component 2, then the other direction holds too: component 2 is connected to component 1. Transitive means: if component 1 is connected to component 2 and component 2 is connected to component 3, then component 1 is also connected to component 3.)
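Because each physical link is stated only once in the model, the symmetric and transitive closure can be deduced automatically; a minimal sketch, using the stand names F1 to F5 from the text:

```python
# Sketch: deducing the symmetric/transitive closure of "connected to"
# from the explicitly modeled links between the finishing-mill stands.
explicit = [("F1", "F2"), ("F2", "F3"), ("F3", "F4"), ("F4", "F5")]

def connected(a, b):
    """True if a and b are linked via a chain of 'connected to' relations."""
    seen, frontier = set(), {a}
    while frontier:
        node = frontier.pop()
        seen.add(node)
        for (x, y) in explicit:
            # symmetry: each explicit link works in both directions
            for nxt in ({y} if x == node else set()) | ({x} if y == node else set()):
                if nxt not in seen:
                    frontier.add(nxt)
    return b in seen  # transitivity: b is reachable over any chain

print(connected("F1", "F5"))  # F1 and F5 are connected through F2..F4
```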


[Figure 22 diagram: the Finishing Mill has parts Four-High Stands 1 to 5, which are pairwise linked by “connected to” relations.]

Figure 22 Relations in the finishing mill instance

In Figure 23 a screenshot for the Four-High Stand F2 is shown. The result of the SPARQL structure query is shown below the heading “Plant Structure”.

Figure 23 Structure Query for a Four-High Stand

Here we see two results: the first shows the “connected to” relation to the next Four-High Stand, the second simply the meta type of this component. For the time being, the results are limited to entries which are defined at the instance itself. It is also intended to show structure information which is defined elsewhere, e.g. the “connected to” from the Four-High Stand F1 to F2 or the “has part” relationship of the Finishing Mill to F2. The automatic deduction of this is in progress.

Task 2.7 Implementation and Testing of the semantic model

We propose a semantic technology stack for instantiating our conceptual modeling approach. Semantic Mediawiki (SMW) forms the basic infrastructure of the technology stack and serves as a collaborative engineering tool. While a conventional wiki includes structured text and untyped hyperlinks, a semantic wiki is additionally based on the representation of metadata elements, which allows for logical reasoning on wiki content. SMW is probably the most popular and mature semantic wiki (Kroetzsch et al., 2006). It relies on the same wiki engine as Wikipedia and uses constructs from the Resource Description Framework (RDF) and Web Ontology Language (OWL) to support semantic web features such as reasoning and querying.

Figure 24 shows a screenshot of the start-page of the I2MSteel Semantic Mediawiki.


Figure 24 Start-page of the I2MSteel Semantic Mediawiki

SMW provides the means to handle all modeling entities of the various models in an information

system in terms of storage, visualization and editing. In the following subsection, we will detail

several features of our infrastructure.

Guided web-based editing

First of all, modeling concepts of all levels of the conceptual model (Entities, Relationships and Attributes) are represented by wiki pages. This enables users to create and edit structured plant knowledge in the form of web pages which are stored in underlying RDF models. SMW provides several ways for custom formatting (e.g. HTML, CSS) to be applied to pages. User guidance is provided by templates and an additional extension “Semantic Forms” that allows engineering experts to create and edit plant engineering data using predefined page styles. Semantic forms are pages consisting of markup code which gets parsed when a user goes to add or edit data. We use semantic forms for all concepts on levels M3 and M2 to guide the editing of pages used by editors of type or instance models. An example of a semantic form for Hardware Component Templates is shown in Figure 25. It is defined by the knowledge engineers of the M2 level and used by steel plant engineers of the M0 level. Several mandatory fields can be defined for all pages using this form. At the same time, forms can use templates that allow storing queries or rules on the pages using the forms.

Figure 25 Example for generation of Hardware Component Instance Finishing Mill

Additionally, the web-based architecture of SMW supports the collaborative editing of community

knowledge since web pages are accessible by all members of the community. Editing can be re-

stricted for certain user groups, e.g. authors of a metamodel or plant constructors, which allows for

clear separation of the modeling levels.

Semantic data representation

Semantic Mediawiki allows using constructs of RDF and OWL. OWL classes are defined by means of category pages with the tag “Category:”; individuals of these classes are normal pages. Pages in SMW can be related via “property” pages, which allows for entering semantic data in SMW. Entities of the conceptual model are modeled as usual pages, and Relationships and Attributes are realized with SMW properties. Several special properties can be used to annotate pages, e.g. “Property:Is inverse of” and “Property:Is Equal To” can be used for mapping equivalent discipline-specific concepts onto each other. To relate entities on the four modeling levels with each other, we defined several property pages, e.g. “Property:meta.concept.entity type”.

When we implemented the model library in SMW, we decided not to use categories but to store all entities and attributes as individuals. This decision was made because the levels M2 and M1 contain concepts that are instances and classes at the same time. Examples are the User Types of level M1, which are instances of the Metatype concepts of level M2 and at the same time serve as classes for the instances on level M0. The same holds for level M2. Thus, we do not distinguish between classes and individuals and model all concepts on the SMW instance level. Relationships are defined as SMW properties and allow distinguishing between the four modeling levels.


Figure 26 Mapping of concepts of the Concept Model to SMW concepts

SMW offers import and export of ontologies. Engineering experts using the wiki can export selected plant engineering data via the page “Special:ExportRDF” by entering a list of articles into the input field. The export will contain an OWL/RDF specification with various description blocks for the exported elements, which can then be used by industrial automation systems. An engineering expert can import an existing industrial OWL ontology into the wiki meta-model using a script that translates OWL into wiki syntax.

Adaptable navigation and visualization

The infrastructure offers means for interlinking plant engineering model entities. It is thus possible to navigate the interlinked structure of the models and to visualize the entities on web pages which can be adapted by the users. The engineering experts can distinguish the different levels of the conceptual modeling approach by the namespaces of the pages in SMW. Every page URL starts with a specific namespace; the name of the page is separated from its namespace by “.”. All concepts of the conceptual model are stored in the namespace “meta.concept” and all metamodels have their own namespace starting with “meta”. The namespaces of the type and instance levels are chosen by the users, e.g. the namespace for our model steelwork has been chosen as “sagunto” (representing the steelwork of ArcelorMittal at Sagunto). Here we recommend using sub-namespaces for each production line, e.g. “sagunto.cold rolling mill” for the cold rolling mill and accordingly “sagunto.hot dip galvanizing line” for the hot dip galvanizing line. In addition there may be generic namespaces for the domain; here we use the namespace “i2msteel” for generic types etc. for the steel domain. The pages with namespace “meta” can be reused for new plants.

All objects in the SMW can be identified by their specific page Id. These Ids correspond to the page URL and are unique identifiers. Ids contain no spaces and can thus be used in queries or rules. The property “Property:meta.concept.id” is used to specify the Id of a page.

In order to provide an easy-to-use user interface with intuitive navigation and a clear structure, we developed a special page design for the final users. The goal was to provide a convenient experience to the users which focuses on the presentation of the most important information while reducing the meaningless. Figure 27 shows an example of the page structure.

Figure 27. Example of page design

Integration of Ontologies and Agents

Within our concept we followed the recommendation of FIPA, which suggests integrating the ontologies by providing so-called “Ontology Services”. In this approach, the access to the external ontology server by internal agents is organised through a specialised agent that provides the ontology services. The benefit of this solution is that the internal agents can use the same communication mechanism to access the ontology as for the intercommunication between agents. Furthermore, the ontology agent can implement additional functionalities such as a translation service, or provide the agent community with knowledge about the relationships between the different ontologies. Figure 28 shows a general scheme of the integration approach. Here the semantic agent provides the functionality to access the ontology stored within Semantic Media Wiki to other agents within the agent platform.


Figure 28. Integration of ontologies

In order to establish the communication to the Triple Store, we utilized the open-source library dotNetRDF (MIT license). The library offers interfaces to most of the popular triple stores, including 4store, which has been used for the demonstration installation. The usage of this library enabled us to implement the semantic agent independently of the selected triple store. Figure 29 shows a screenshot of the semantic agent running inside the D’ACCORD agent platform. We also developed a simple user interface for the semantic agent that allows performing semantic queries on the knowledge repository manually.

Figure 29. Screenshot of semantic agent implementation


2.2.3 Work package 3: Agent Holon Development

Task 3.1 Functional analysis for agent/holon description and development of general approach of

agent/holon system

Starting from the functional requirements defined in WP1, a deep analysis of the situation at the demonstration example plant was performed. The use cases presented in WP1 were investigated and the two most urgent scenarios of the industrial partner were identified to be:

• Alternative Thickness

• Reallocation

The first use case “Alternative Thickness” addresses a local issue in the steel production of the

demonstration plant in Dunkirk.

In the use case “Reallocation”, a much larger scope is regarded. Leaving the horizon of a single plant is of great industrial interest to AM. Therefore we expand the application of the I2MSteel system to a division-wide inter-plant reallocation of products to existing orders.

With the assistance of AM, the following details of functional solutions for the demonstration exam-

ple plant in Dunkirk and the Business Division North were worked out:

Functional analysis of use case “Alternative Thickness”

In this use case, the example plant fails to roll the specified thickness of an order. The decision whether the target thickness can be reached or not is made by the Level-2 system, considering e.g. the temperature distribution behind the roughing mill. Let h denote the thickness. If the achievable minimum thickness hmin is outside the tolerance margins [h1, h2] of the original order, commonly hmin > h2, the order cannot be fulfilled, consequently triggering our use case. The I2MSteel solution is now expected, using hmin and an acceptance window of time ∆t as input, to deliver an alternative value for h. The request is sent after the slab has left the roughing mill and before it enters the finishing mill.
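The triggering condition can be summarised in a small sketch; the thickness values below are hypothetical:

```python
# Sketch of the use-case trigger: the order specifies tolerance margins
# [h1, h2] for the thickness h; if the achievable minimum thickness h_min
# lies outside these margins (commonly h_min > h2), the order cannot be
# fulfilled and an alternative thickness is requested from I2MSteel.
def needs_alternative_thickness(h_min, h1, h2):
    return not (h1 <= h_min <= h2)

# Hypothetical order: tolerance window 2.0..2.2 mm, achievable minimum 2.5 mm
print(needs_alternative_thickness(2.5, 2.0, 2.2))  # True -> trigger use case
```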

With regard to this scenario, production and order data of the example plant Dunkirk was carefully examined by BFI and CSM. The use case was found frequently, and Figure 30 features a view of one exemplary day of production where the use case appeared. In this thickness diagram, the slabs are shown with respect to the point in time they were rolled. The height of each bar represents the rolled thickness h. Slabs failing their target thickness hTarget are plotted in black, where the red extension denotes the actual rolled thickness of the product. If the actually rolled thickness coincides with the target thickness of the original order, the slab is marked green. Most of the slabs are of course in this category. This type of diagram helps to quickly browse through the production data and find the occurrences of our use case.


Figure 30: An example for a Dunkirk day production schedule.

The functional solution path of the I2MSteel system is twofold:

1. A suitable value for an alternative thickness h is first based on the evaluation of historic data. To do this, CSM and BFI designed a system concept in which production data is stored continuously. It estimates the most likely thickness a customer would eventually order, with respect to a set of underlying constraints such as steel type and width. This so-called historic database reflects the needs of the buyers and can even be adjusted to specific months to include the impact of seasonal changes in the order types.

2. In an independent second step, the order book of the MES is scanned for orders that neatly fit the characteristics of the current slab, e.g. in terms of width and steel type. In case such an order is found, it would allow for a fast reassignment of the slab. Of course, this secondary solution is the preferable one, because it corresponds to a real order. But the network communication to the MES is known to be slow at the example plant, with answering times of up to multiple minutes. This could be longer than the given time interval ∆t in which the user must have an answer from the I2MSteel system, which is actually the amount of time it takes for the slab to travel from the roughing mill to the finishing mill. Therefore, even if the MES had an optimum result in terms of a real order, a first decision has to be made by evaluating the historic database in (1).
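The twofold solution path, a fast historic estimate and a preferred but possibly too slow MES lookup bounded by ∆t, can be sketched as follows; all timings and thickness values are hypothetical:

```python
# Sketch of the twofold solution path: the historic-database estimate is
# always available quickly; the MES order-book scan is preferred but may
# exceed the time window dt in which the answer must be delivered.
def alternative_thickness(historic_estimate, mes_lookup, mes_latency, dt):
    """Return the MES match if it arrives within dt, else the historic guess."""
    if mes_latency <= dt:
        order = mes_lookup()
        if order is not None:
            return order          # real order found in time: best solution
    return historic_estimate      # fall back to the most likely thickness

# MES answers after 180 s, but the slab reaches the finishing mill in 60 s
result = alternative_thickness(2.3, lambda: 2.4, mes_latency=180, dt=60)
print(result)  # 2.3 -> the historic estimate is used
```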

We have investigated different approaches to defining the agents necessary to fulfil the requirements, and while these definitions will be presented in more detail in the description of Task 3.2, let us here briefly point out what functions those agents should have:

For solution path (1), the most robust configuration applies a specialised agent for evaluating his-

toric data. This agent will be continuously fed by data from the Level-2 system.


In particular for solution path (2), the local reallocation based on the MES order data can deploy a bargaining approach within a marketplace. Bidding agents represent orders in the MES and a selling agent sells the current slab as the item. This approach is extensible in the sense that the same rules of the marketplace can be used in different plants, and new additional providers of bids can be added without modifying the remaining agents. Not only does a straightforward relation to the use case Reallocation emerge, in fact the market strategy is exactly the same. The functions and, as later shown in Task 3.2, the modular agent design are completely reusable by design, in agreement with our global goals specified in Task 1.1.

Functional analysis of use case “Reallocation”

In this use case an already finished product misses the product specifications of its original order. For the sake of simplicity we will refer to scenarios in which – again – the target thickness could not be achieved. Judging from the data of the example plant, this is also the most realistic scenario. The partners developed a distributed market concept in which the I2MSteel framework is deployed to check whether a different order exists that fits the given product. This concept is not only used within a single plant; it extends the agent-based solution and transfers it to the scope of a whole business division. The market strategy, already fundamentally used for extracting an alternative thickness value from the MES in the previous paragraph, can be reused to do this.

Here, available orders act as bidders in an auction for the product. With the assistance of AM, the partners BFI and CSM developed and applied a metric on steel production data that was necessary to allocate a virtual price to the orders. Several order parameters like steel type, width, thickness, chemical composition or delivery date are put into this metric function and evaluated. It determines a numerical value that expresses how well an order matches the product which was rolled with the wrong thickness. From this value we can derive a virtual price that a bidding agent, representing one single order, is willing to pay for this specific item. Three different outcomes of the negotiation exist:

a) There is no solution: no order is close enough to the product for a reallocation to be successful.

b) Multiple orders could be accepted; the I2MSteel system proposes the order with the highest price, because this is the best matching option. In this case, multiple solutions with the same price might also occur. Here, acceptable means that the product falls into the tolerance margins of the customer without being a perfect match.

c) There are one or more exact matches. All candidates are returned by the I2MSteel system.
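The price metric and the selection of the outcome can be sketched as follows; the weighting, parameter names and order data are purely illustrative, not the project metric:

```python
# Illustrative sketch of the reallocation auction: each bidding agent
# represents an order and derives a virtual price from how well the order
# matches the mis-rolled product. The weights below are hypothetical.
def virtual_price(product, order):
    """Higher price = better match; None = outside tolerance (no bid)."""
    if order["steel_type"] != product["steel_type"]:
        return None
    if not (order["h1"] <= product["thickness"] <= order["h2"]):
        return None
    # closer to the order's nominal thickness -> higher price
    return 100.0 - abs(product["thickness"] - order["h_target"]) * 10.0

def run_auction(product, orders):
    bids = {o["id"]: virtual_price(product, o) for o in orders}
    bids = {k: v for k, v in bids.items() if v is not None}
    if not bids:
        return []                                     # outcome a): no solution
    best = max(bids.values())
    return [k for k, v in bids.items() if v == best]  # outcomes b) / c)

product = {"steel_type": "S235", "thickness": 2.5}
orders = [
    {"id": "O1", "steel_type": "S235", "h1": 2.4, "h2": 2.6, "h_target": 2.5},
    {"id": "O2", "steel_type": "S235", "h1": 2.0, "h2": 2.6, "h_target": 2.2},
    {"id": "O3", "steel_type": "S355", "h1": 2.4, "h2": 2.6, "h_target": 2.5},
]
print(run_auction(product, orders))  # ['O1'] -> best matching order wins
```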

The following specific functionalities are required:

• Access to a division wide order book is needed to actually represent orders by bidding

agents.

• A marketplace is essential to enforce the rules of negotiations and auctions.

• Agents representing orders must calculate their price expectation.

Let us briefly summarize the essential messages for both use cases again:


For the use case Alternative Thickness, we apply a historic database to provide a first guess for h, and in a secondary step a marketplace negotiation is performed with bidders representing the orders from the MES and the current slab acting as the selling item. The MES result has higher priority, but if it is not reached within the specific time ∆t or if there is no solution, the result from the historic database is propagated to the user.

For the inter-plant Reallocation, we use the same negotiation approach as before in Alternative

Thickness, but now, the orders originate from the order books of different plants within the busi-

ness division.

Task 3.2: Implementation of the agents and holons abstract description

In Task 3.2 CSM, BFI and SIE derived a general, modular structure to guide the development of agent systems using class-based implementation strategies. As seen in the description of Task 3.1, a diversity of agent functionalities is needed, while most of these agents will share a common set of basic operations. In such modular systems it is usual to apply the concept of inheritance, allowing e.g. highly specialized agents to inherit a set of shared operations from a so-called meta-agent class: procedures which are rather basic but mandatory for each agent. Secondly, functional agents derived from this meta-agent class represent the functionality required within a specific use case. The partners also defined a strategy by which existing agent technologies are utilized to realize solution concepts for both the framework and the use cases.

General approach

Because autonomous agents and, more precisely, systems of these agents play such a central role

in the I2MSteel project, we would like to shed more light on these objects and provide some deep-

er definitions of our nomenclature here.

An agent is a computational program that proactively performs some sort of action, and from the development perspective, the nature of such programs straightforwardly induces a class- or object-oriented implementation paradigm. The starting point is an abstract class called entity, which contains a name, an identification number (ID) and a concept of time. The latter reflects the fact that the use case solution is subject to time constraints, as given by the strict scheduling of the alternative thickness or the reallocation of a product within a given time.

In more detail, the project-wide notion of an agent is as follows: an autonomous agent is an entity which (a) is subject to a defined set of constraints, (b) has an internal state (typically called utility), (c) is able to autonomously take actions to change its state and (d) can communicate with other agents. Being the main channel of propagation, (d) is certainly the most important aspect of a MAS: any solution is achieved by the combination of communicative action and the reactive response of the rest of the system.

Agents can only communicate when they know about the presence of other agents, so there needs to be a programmatic unit that collects information about all available agents in our system. The consortium agreed on using the term broker for this unit and decided that any agent of the I2MSteel MAS has to register with the broker. Here, SIE shared its strong experience with object managers from former sensor surveillance projects. During the last year it was decided to enhance the functionality of the broker in order to manage the registration of, and the search for, specific services exposed by an agent.


Within this context, SIE, CSM and BFI defined the meta-agent class as follows: the meta-agent has

a name and an identification number inherited from the entity, an internal state and all necessary

functions to communicate. The latter includes the concept of a language and abilities to register

and unregister with the broker that has to be aware of all agents in the system.
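A minimal sketch of the entity/meta-agent/broker design; the class and method names are our own illustration of the concepts described above, not the project implementation:

```python
# Sketch of the inheritance design: entity -> meta-agent, plus the broker
# that has to be aware of all agents and their services in the system.
class Entity:
    """Abstract base: a name and an identification number (ID)."""
    def __init__(self, name, agent_id):
        self.name = name
        self.id = agent_id

class Broker:
    """Collects information about all available agents and their services."""
    def __init__(self):
        self.registry = {}
    def register(self, agent, services):
        self.registry[agent.id] = (agent, services)
    def unregister(self, agent):
        self.registry.pop(agent.id, None)
    def find(self, service):
        """Return all agents exposing the requested service."""
        return [a for a, s in self.registry.values() if service in s]

class MetaAgent(Entity):
    """Shared basic operations inherited by every functional agent."""
    def __init__(self, name, agent_id, broker):
        super().__init__(name, agent_id)
        self.state = None           # internal state ("utility")
        self.broker = broker
    def register(self, services):
        self.broker.register(self, services)

broker = Broker()
a1 = MetaAgent("Alternative Thickness", 1, broker)
a1.register(["alternative thickness"])
print([a.name for a in broker.find("alternative thickness")])
```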

Whenever agents form a group to solve a problem, this group is called a coalition of agents, and if the coalition is the only possible instance able to solve the problem, such a coalition becomes a holon. In other words, a holon is a combination of differently skilled agents. A set of functional agents comprises a functional holon.

As depicted beforehand, communicative action, or more specifically negotiation, is important for the solution path of any MAS. BFI, SIEMENS and CSM identified a further concept to be of major importance for applying a MAS framework to the use cases in the steel industry: the marketplace. This was already discussed in the functional analysis presented in Task 3.1. A marketplace in the context of the I2MSteel project is a programmatic institution wherein negotiations can efficiently take place and which enforces the inherent rules of the negotiation process. This can be more easily understood when we look at auctions as a specific subclass of negotiations. There are different types of auctions, each with a very specific and occasionally complex procedure of bidding, requiring a regulating body that guarantees the rules of the auctions are met. The marketplace allows agents to open up a new negotiation, informs other agents that they can take part in this process and finally handles the communications and bids. For that purpose, other agents can register with the marketplace beforehand to be notified about negotiations of a given type.
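The marketplace mechanics described above can be sketched as follows; the class, the callback-based notification and the bid values are hypothetical illustrations:

```python
# Sketch of the marketplace: agents register interest in negotiation types,
# are notified when an auction of that type opens, and place bids; the
# marketplace enforces the rules (here simply: highest bid wins).
class Marketplace:
    def __init__(self):
        self.interested = {}        # negotiation type -> list of bid callbacks
    def register(self, negotiation_type, bid_callback):
        self.interested.setdefault(negotiation_type, []).append(bid_callback)
    def open_auction(self, negotiation_type, item):
        # notify registered agents and collect their (bidder, price) offers
        bids = [cb(item) for cb in self.interested.get(negotiation_type, [])]
        bids = [b for b in bids if b is not None]
        return max(bids, key=lambda b: b[1]) if bids else None

market = Marketplace()
# two hypothetical bidding agents representing orders
market.register("reallocation", lambda item: ("order A", 97.0))
market.register("reallocation", lambda item: ("order B", 100.0))
print(market.open_auction("reallocation", "slab 4711"))  # ('order B', 100.0)
```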

Functional design

This series of definitions is the basis for the I2MSteel agent framework and is regarded as shared

functionality among all use cases. Next we will explain how the functional concept of Task 3.1 is

implemented using agent technology. Leaving the rather abstract definitions behind, we now intro-

duce our understanding of a so-called functional agent: Functional agents are agents actually per-

forming a specific action within a use case. A functional agent hereby inherits the properties and

basic functionality of the meta-agent, but extends these shared parts by a set of properties and

actions that are required to operate within the use case. Accessing a database, semantic querying,

data conversion and providing even simpler operational services like storing and loading data snip-

pets are just a few examples of such actions. In the same sense we use the term functional holon

for a holon that performs a specific action. Note that not all contributing agents of a functional ho-

lon must necessarily be functional agents.

Similar to the presentation in Task 3.1 we will now specify the functional agents corresponding to

the two selected demonstration examples:

Functional agents for the use case “Alternative Thickness”

The functionality for the use case Alternative Thickness as described in Task 3.1 can be implemented by an agent system. As previously described, our broker platform registers and manages the agents, and the following agents are involved: the Level-2 system, the Alternative Thickness agent A1, the historic database agent A2 and the Level-3 system agent A3. An example of a typical registration message is given in Figure 31, whereas Figure 32 shows the formal registration sequence to the broker platform and the service request of the Level-2 system. In step 9 of this sequence, Level-2 asks the platform for the available services provided by the registered agents, and the platform answers that an alternative thickness service can be requested by Level-2 when needed. All participants are now registered and can ask the broker about specifically skilled agents within the system.
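The registration and lookup steps can be illustrated with a minimal broker sketch (Python for illustration only; the real platform is implemented in C#, and the class and method names below are ours):

```python
class Broker:
    """Minimal broker: agents register with a service description and can be
    looked up by service name (cf. steps 1-10 of the registration sequence)."""

    def __init__(self):
        self.registry = {}
        self.next_id = 1

    def register(self, name, description, location, address):
        # Returns an ID, mirroring the "ID := ID Return" messages.
        agent_id = self.next_id
        self.next_id += 1
        self.registry[agent_id] = {
            "name": name, "description": description,
            "location": location, "address": address,
        }
        return agent_id

    def search_service(self, name):
        # Level-2 asks for available services; the broker answers with the
        # matching service infos (name, address, ...).
        return [info for info in self.registry.values() if info["name"] == name]
```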

Agent Registration Message

ID          1
Name        "Alternative Thickness"
Description "Searching for alternative thickness in production history and current order book"
Location    "HotStripMillNo1"
Address     192.42.227.168:6100

Figure 31. Registration information of an example agent.

One agent, the alternative thickness agent A1, handles the procedure as such and waits for requests from the Level-2 system. The Level-2 system knows from a request to the broker, cf. Figure 32, that an Alternative Thickness service is available and that agent A1 offers it. An example of the communication of the Level-2 system with the broker is given in Figure 33, namely the request to get the address of an agent that offers the service "Alternative Thickness". The following series of actions and reactions is detailed in Figure 34. If the Level-2 system detects a slab not achieving its target thickness, it asks the agent A1 for a solution. A1 opens a new auction at the marketplace in which two other agents can place their bids: the historic database agent A2 and the MES agent A3. The bid of A2 is placed quickly and the auction remains open for a specific duration ∆t. Note that the time limitation, explicitly required by our functional analysis, is a natural characteristic of traditional auctions. A3 can also place a bid, which might then be favoured by A1. If A3 does not deliver within the time ∆t, A2 is the only bidder and becomes the winner.

Please note that the Level-2 system continuously reports the current production properties of slabs to the historic database agent A2. This is necessary to build up the database and is also shown as a partial sequence in Figure 34.
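The time-limited auction can be sketched as follows (an illustrative Python fragment; for simplicity we assume the "best" bid is the one with the highest offered value, which is our own simplification of A1's selection logic):

```python
def run_auction(bids_with_delay, delta_t):
    """Close an auction after delta_t and pick the winner among bids that
    arrived in time. Each entry is (agent, value, arrival_time).
    The time limit of the functional analysis maps onto the natural
    closing time of a traditional auction."""
    in_time = [(agent, value) for agent, value, t in bids_with_delay if t <= delta_t]
    if not in_time:
        return None
    # Simplified preference: favour the highest offered value.
    return max(in_time, key=lambda bid: bid[1])
```

If A3's bid arrives after the deadline, A2 remains the only bidder and wins by default, exactly as described above.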

There are different types of holons present, and we intentionally kept the term agent as it helps to understand the actions on a higher level. One abstract holon that emerges quite naturally is the bargaining coalition in the marketplace formed by the agents A1, A2 and A3: no single agent could provide a usable output alone here. Moreover, A2 is in fact a holon. It consists of other agents which provide basic functionality. A2 contains at least one persistency agent, handling the storing and loading of data into the historic database. To do this, it may need the help of a semantic agent that translates the Level-2 information into the internal language of the I2MSteel system.


Figure 32. Sequence diagram for the registration of agents with the broker platform.

Request service

ID          101
Name        "GetAlternativeThickness"
Description "Request from the broker the address of an agent that provides the service Alternative Thickness"
Location    "Level-2 System"
Address     192.42.227.112:6100

Figure 33. Example of a service request to the broker, here committed by the Level-2 system.

[Diagram content (registration sequence, cf. Figure 32): the agents E1 Level2, A2 Historic DB, A3 Level3 (MES) and A1 Alternative Thickness each register with the platform (messages 1-8: register() / ID return); Level-2 then searches for a service (9: search service()) and receives the list of service infos for ALT_THICKNESS (10).]


Figure 34. Sequence diagram of the agent interplay for use case Alternative Thickness.

Results of the multi-agent implementation

SIE provided means to reconstruct orders from the Level-2 system and to interpret data of their Level-3 system architecture. AM granted access to the historic order data in the MES system at the Dunkirk plant. Using these available data resources, a functional prototype of a multi-agent system was established and tested successfully by CSM and BFI. In Figure 35 we present some exemplary results of these agent-based evaluations. The historic database result is given in terms of a histogram. It shows a maximum count for a thickness of 2.0 mm, which is expected. Nevertheless, a bad temperature distribution along the slab, measured after the roughing mill, prevents the plant from rolling the slab to thicknesses smaller than 2.7 mm. These can in principle no longer be reached. The I2MSteel system evaluates the historic data and places a bid representing 3.5 mm as the best possible product thickness in the marketplace. In various configurations we tested scenarios where either the result of the historic database remained the only bid because, due to the strict matching constraints, no further order with the same width and steel type could be found in the MES, or scenarios where the MES came up with a suitable future order that could be dynamically reallocated. In Figure 36 we utilize the same visualization technique as in Figure 29: slabs marked in green were rolled as ordered. Black and red indicate that the slab could not reach its target thickness, where black is the original target h and red indicates the factual deviation from the order. Blue marks a potential entry in the MES order book which could have been reallocated. Both the slab data and the order data match perfectly with respect to all required dimensions, e.g. steel type, width and analysis.

[Diagram content (agent interplay, cf. Figure 34): 1: E1 Level2 requests the service ALT_THICKNESS; 2: A1 Alternative Thickness opens an auction for an alternative thickness at the market place; 3-4: the market place informs A2 Historic DataBase and A3 Level3 (MES) about the opened auction; 5-6: A2 and A3 place their alternative thickness bids; 7: A1 suggests the alternative thickness; 8: the thickness is returned to Level-2; 9: Level-2 informs A2 on each produced coil.]


Figure 35. Histogram of thicknesses rolled with a width of 1343 mm and a specific steel type (7675) over a period of two months. The steel type designation follows the common practice of the partner AM. Thicknesses below 2.7 mm were no longer achievable.

Figure 36. MES based reallocation with a future order in a reconstruction of historic data from the

demonstration plant in Dunkirk. The x-axis represents the time and the y-axis represents the

thickness h. The width of each bar represents the total duration needed for the rolling process.


Figure 37. Mindmap of the agents/holons required for use case “Alternative Thickness”.

From the experiences with the prototypical implementations, a mindmap summarizing all objects required by the use case "Alternative Thickness" is shown in Figure 37. Colored nodes represent the functionality as described by Task 3.1. This concludes our treatment of the use case "Alternative Thickness" within WP3.

Functional agents for the use case “Reallocation”

With the experience from the previous use case regarding bidding and selling within the marketplace, we could establish the inter-plant reallocation in a straightforward way. We fully reused the metrics described above. Again, AM provided historic data from the steel production, CSM and BFI constructed a rapid-prototyping and validation framework for multi-agent market negotiations, while SIE focused on the software technological aspects of our functional design.

The use case is triggered by selecting a dislocated coil. Some parameters of this product failed to meet the specified tolerances, or it was overproduced and the original order had to be mandatorily reassigned. In the virtual perspective the deficient coil becomes a product item on a virtual market.


Figure 38. Transfer of a misrolled coil into the reallocation process.

An agent, called FailProductAgent, is in possession of the item and wants to sell it on the I2MSteel marketplace. In Figure 38 the transfer of a coil to an item associated with a fail product agent is presented. With the item, the fail product agent opens up a new auction in the marketplace and waits for incoming bids.

After being informed by the marketplace about a new open auction, the order book agents of the plants define a group of future orders to take part in the auction. Orders that are currently processed in the plants are considered out of range; only those orders for which the production process has not yet started are of interest in the auction. Each plant sends its selected group of order items to the marketplace.

At the marketplace, the order agents and the fail product agent exchange and negotiate the details of their items. The order agents calculate an index allowing the final user to judge how well each offered item specification would fit the given product.

In Task 3.1 we presented a list of expected negotiation outcomes, and this list was verified by the results of our prototypes. We found a few situations where no order was close enough to the specifications of our coil, so that no solution existed, but this scenario mostly appeared for very specific order configurations that do not belong to the customer mainstream production of the plant. When choosing a defect for which the final defect product is still within the majority of product specifications, the conditions change: orders for these products appear far more frequently with very similar configurations. Therefore, the probability of finding a suitable order for reallocating the coil is much higher. Based on the historic example data, we reconstructed multiple cases for which the reallocation was highly beneficial. Work on the exact quantification of the benefits, especially with respect to the commercial impact, is ongoing and carried out in close cooperation with the industrial partner AM.


Figure 39. Mindmap of agents/holons for the use case “Reallocation”.

Our description of the functional agents for the use case "Reallocation" is concluded with a brief overview in Figure 39, showing a mindmap of the involved objects. Note that not all processes can be effectively displayed by such a diagram. Nevertheless, the structure of the market place concept and the modularity can be seen very clearly. Any plant can contribute new agents to the market place; in the given demonstration example of the I2MSteel project we are only considering the AM Division North. The negotiation architecture is highly flexible, and through the use of the semantic agents we are able to map any plant-specific language or database architecture directly into our system. We are therefore in line with our global goals stated in Task 1.1.

Going into more detail of the implementation in the selected platform for the Reallocation use case, the following agents/services have been identified:

Agents and their provided services:

• OrderAgent: calculates the match between a failed product and available orders and estimates the reworking and reallocation costs for finished product orders. Also generates bids on the failed products based on the calculated information.
  - MatchingHotOrders: calculates the match for intermediate products (e.g. hot strip)
  - MatchingFinalOrders: calculates the match for finished products (e.g. hot dip galvanised coils)

• OrderDBAgent: provides order information.
  - RetrieveHotCoilOrders: provides order information of intermediate products
  - RetrieveFinalProductOrders: provides order information of finished products

• FailProductAgent: provides the failed products and manages the market place and the negotiations for submitted products.
  - GetFailedProducts: provides a list of failed products
  - GetSubmittedProducts: provides a list of failed products which have been placed on the market place for reallocation

• FailProductDBAgent: provides failed product information.
  - RetrieveFailProducts: provides failed product information

• ProximityAgent: calculates the match between a product and an order based on mechanical characteristics.
  - CalculateProximityIndex: calculates the proximity between one product and one order
  - CalculateMultipleProximityIndexes: calculates the proximity between one product and a list of orders
  - CalculateCostIndex: calculates the reworking costs for a failed product in order to match an order

• FinalProductAgent: estimates whether a failed product can reach the mechanical characteristics of an order by applying additional treatment. For the estimation, locally linearized surrogate process models are applied.
  - CalculateFinalThicknessRange: estimates the possible thickness range which can be achieved for a product
  - CalculateFinalProductMechanicalCharacteristics: estimates the range of mechanical characteristics which can be achieved for a product

• Semantic Agent: provides access to the ontology.
  - ExecuteQueryAsString: executes a query on the triple store and returns the result as a string
  - ExecuteQueryAsJSON: executes a query on the triple store and returns the result in JSON format

Moreover, a library has been implemented to enable the connection between the agent platform and the Service Oriented Architecture for accessing the databases (see the description of WP4 for more detail).

SOAUtils Library    Provides general access to the SOA
DB Service library  Provides database access via the SOA

In order to illustrate the typical interaction between the agents and the platform, an example registration message is given in Figure 40, whereas Figure 41 presents the formal registration sequence to the platform and the service request by other agents. In the first step the OrderDBAgent registers a service called RetrieveHotCoilOrders at the platform by sending the registration message shown in Figure 40. This service is now accessible for other agents.

Further, in step 5 of this sequence, another agent called OrderAgent asks the platform for the available services provided by the available agents. The platform answers with the list of all agents implementing the requested service, i.e. RetrieveHotCoilOrders exposed by OrderDBAgent.

Agent Service Registration Message

Name      "RetrieveFailProducts"
Type      "DBAccess"
Ontology  "link to ontology if required"
Language  "English"
Access    AccessEnum.BOTH
Scope     "global"
Ownership "I2MSteelConsortium"

Figure 40. Registration information of an example agent.

Figure 41. Sequence diagram for the registration of agents and services with the platform and service utilization.

Using this approach in the current use case leads to the following procedure: one agent, the FailProductAgent, starts the reallocation process by searching for a service that provides a list of the available products for reallocation (the RetrieveFailProducts service). Then it can ask the agent providing this service, in this case the FailProductDBAgent, to execute it (i.e. to deliver a list of available products). The final user can then select one or more of the available products, and the FailProductAgent opens a new negotiation for each of the selected products.

The broker notifies all interested agents (in particular the OrderAgent) that a new negotiation is open. Two main types of reallocation can be foreseen: as-is or with reworking.

As-is reallocation:

The OrderAgent checks the availability of orders for the product as-is by searching for the RetrieveHotCoilOrders service. It then asks the OrderDBAgent to provide the service, i.e. to deliver a list of orders more or less compliant with the selected product. The OrderAgent then searches for a further service (CalculateProximityIndex) and asks the ProximityAgent to evaluate a level of compliance (between 0: not compliant and 1: fully compliant) between each selected product and each retrieved order. In this way a list of possible orders for reallocation, sorted by compliance, is provided to the final user.

Figure 42. Sequence diagram of the agent interplay for the use case Reallocation as-is.
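The as-is ranking step can be sketched as follows (illustrative Python; the callable proximity stands in for the ProximityAgent's CalculateProximityIndex service, and the data layout is our own):

```python
def rank_orders_for_reallocation(product, orders, proximity):
    """Rank candidate orders for as-is reallocation by their compliance level
    (0: not compliant, 1: fully compliant), best match first.
    `proximity` plays the role of the CalculateProximityIndex service."""
    scored = [(order, proximity(product, order)) for order in orders]
    # Orders that are not compliant at all (index 0) are dropped.
    scored = [(order, score) for order, score in scored if score > 0]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```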

Reallocation with reworking:

This type of reallocation needs some more steps, as presented below:

1. calculate the prediction of the mechanical characteristics of the final product,
2. verify the compliance of the calculated characteristics with the available orders,
3. evaluate the costs necessary for the reworking.

Algorithm for mechanical characteristics prediction

We present a principle to reallocate a product to an order which requires a process that will modify the mechanical characteristics of the product. The aim of this algorithm is to predict these characteristics. We detail a case where we would like to reallocate a hot rolled coil to a galvanized order. A similar method can be applied to reallocate a slab to a coil order or a full hard coil to a galvanized order.

A model developed by AM is used to determine the values of YS and UTS as well as their derivatives with respect to the process parameters.

Depending on the SteelGradeFamilyModel, the thickness of the coil, the type of the line and the width, a first evaluation of the mechanical characteristics is done with typical process values (e.g. reduction rate, annealing temperature, skin pass elongation, ...). SteelGradeFamilyModel indicates which type of model has been used to evaluate the mechanical characteristics with standard process values. Based on these results and AM knowledge, stored in specially developed tables, a range for the final thickness can be evaluated.

Those orders compliant with the first evaluation of the mechanical characteristics and the hypothesized thickness range can now be selected.

Once the final thickness is identified, a better evaluation of both the Yield Strength (YS) and the Ultimate Tensile Strength (UTS) can be obtained using the foreseen reduction at the Cold Strip Mill and estimates of the annealing temperature and skin pass elongation according to the final thickness. Also for this purpose, specially developed tables based on AM knowledge are used.

The compliance with the chosen order can now be evaluated.

A sequence diagram of the implementation in terms of agents and their relationships can be found in Figure 43.
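The locally linearized surrogate used for the prediction can be sketched as a first-order expansion around the typical process values (illustrative Python; the real models and derivative values are AM property, and the parameter names below are our own):

```python
def predict_characteristic(base_value, derivatives, typical, actual):
    """First-order surrogate: the value obtained at typical process
    parameters plus derivative * deviation for each parameter
    (e.g. reduction rate, annealing temperature, skin pass elongation)."""
    value = base_value
    for param, d in derivatives.items():
        value += d * (actual[param] - typical[param])
    return value
```

For example, a YS of 300 MPa at a typical annealing temperature, with a sensitivity of -0.5 MPa/K, drops when the foreseen annealing temperature is higher than the typical one.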


Figure 43. Sequence diagram of the agent interplay for the evaluation of proximity for reworking products.

Proximity Search Engine (PSE)

The Proximity Search Engine, covered by the proximity agent, has been developed in order to implement the calculation of the proximity and cost indices. The proximity index is needed to evaluate the matching between the fail product to reallocate and the active order to satisfy.

The PSE exposes three services to the I2MSteel platform:

1. CalculateProximityIndex: this service is called to calculate the proximity index of one order and one fail product.
   a. Input: a JSON-serialized dataset composed of two data tables called order (containing order data) and product (containing fail product data). The two data tables have only one row.
   b. Output: a string containing the proximity index value.

2. CalculateMultipleProximityIndexes: this service calculates the proximity indexes of one fail product and several orders.
   a. Input: a JSON-serialized dataset composed of two data tables called order (containing order data) and product (containing fail product data). The product data table has only one row; the order data table may have multiple rows.
   b. Output: a JSON-serialized dataset containing one data table (ProximityIndexes) with three columns: OrderNumber, ProductNumber, ProximityIndex.

3. CalculateCostIndex: it calculates the cost index of one fail product for one order.
   a. Input: a JSON-serialized dataset composed of two data tables called order (containing order data) and product (containing fail product data). The two data tables have only one row.
   b. Output: a JSON-serialized dataset containing one data table (CostIndexes) with two columns: ruleName, costIndex.

Proximity Index

The proximity index is a float value between 0 and 1 representing the matching degree between the fail product features and the order requirements. Its calculation is driven by a dedicated XML file (called comet_sel_rules.xml) representing the company rules for product and order matching.

The rules of the proximity index are of two types:

• Matrix Operator

  <ProximityRule id="proxCategory">
    <DecoratedNode id="Category" />
    <MatrixOperator style="identity">
      <Attribute attr="CR" />
    </MatrixOperator>
  </ProximityRule>

  This rule is of type 0/1. If the DecoratedNode parameter of the fail product fits the required value, the proximity index has the value 1 (i.e. product and order are compatible), otherwise 0 (impossible to reallocate the product to the order).

• Trapezoidal Operator

  <ProximityRule id="proxN">
    <DecoratedNode id="n" />
    <TrapezoidOperator infMax="0" infMin="0.5" mode="relative" />
  </ProximityRule>

  This kind of rule is typically applied to fail product characteristic parameters and gives a range of allowed variation for them, see the next figure. This rule can be configured with two or four min and max thresholds.

The proximity index of a given fail product in relation to an order is obtained by multiplying all rule results. The path of the XML rules file and the active rules are managed in the PSE configuration file.
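The two rule types and the final multiplication can be sketched as follows (illustrative Python; the two-threshold trapezoid variant is shown, and the function names are ours rather than the PSE's internal ones):

```python
def matrix_rule(product_value, order_value):
    # Identity matrix operator: 1 if the attribute matches exactly, else 0.
    return 1.0 if product_value == order_value else 0.0

def trapezoid_rule(deviation, full_match, no_match):
    """Trapezoidal operator (two-threshold variant): 1 for deviations up to
    full_match, 0 beyond no_match, linear in between."""
    dev = abs(deviation)
    if dev <= full_match:
        return 1.0
    if dev >= no_match:
        return 0.0
    return (no_match - dev) / (no_match - full_match)

def proximity_index(rule_results):
    # The overall index is the product of all individual rule results.
    index = 1.0
    for r in rule_results:
        index *= r
    return index
```

A single 0/1 rule returning 0 therefore makes the whole product/order pair incompatible, while the trapezoidal rules grade the remaining candidates.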

Evaluation criteria

In order to produce an assessment of the results generated within the reallocation use case, we developed special cost-based evaluation criteria. These criteria enable us to sort the results based on the estimated costs required for the reworking activities. As already mentioned, we differentiate between "as-is" orders and orders with reworking. For the "as-is" orders the evaluation mechanism is not applied, because the product can be reallocated without further treatment and therefore no additional costs arise. Here the ordering is done based on the proximity evaluation, which is a measure of the conformity between product and order.

For the reworking orders we introduced different cost factors which represent the different aspects of the additional treatment. In the end these costs are combined into a total cost, thus providing one value and enabling the sorting of the results.

The foreseen dimensions of the cost evaluation are:

• Logistics: the costs which arise from the transportation of the products, e.g. when the treatment has to be performed at a plant different from the originating one.
• Treatment: this cost defines the expenditure of the reworking activities.
• Administration: this cost defines the expenditures for administrative tasks related to the reallocation process. These costs might be estimated and adjusted by the user.
• Client Risk: a cost which represents the risk of a customer complaint.
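The combination of the cost dimensions into one sortable total can be sketched as follows (illustrative Python; the equal default weights are our assumption, since the actual formulas are defined in the configuration files):

```python
def total_cost(costs, weights=None):
    """Combine the cost dimensions (logistics, treatment, administration,
    client risk, ...) into one sortable total. The weighting is an
    assumption; the real formulas live in the configuration files."""
    weights = weights or {name: 1.0 for name in costs}
    return sum(weights[name] * value for name, value in costs.items())

def sort_reworking_offers(offers):
    # Offers are (order_id, cost dict) pairs; cheapest total first.
    return sorted(offers, key=lambda offer: total_cost(offer[1]))
```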

In order to provide a flexible mechanism for the cost evaluation, we decided not to limit the cost factors to those described above. Instead, we allow defining the cost dimensions as well as their functions within configuration files which can easily be adapted by administrators. That means that if one wishes to add new cost dimensions or to adjust a calculation formula, it is only required to adjust the configuration files. For the technical realisation we utilised the .NET runtime compiler, which allows compiling source code during the execution of the application.

The configuration file is compiled by the PSE agent during the starting phase. The file appears as follows:

using System;
using System.Collections.Generic;
using System.Reflection;
using System.Text;
using System.Threading.Tasks;

namespace DynamicRules
{
    public class MyRuleClass
    {
        public static double Rule1(double d1, double d2)
        {
            return d1 + d2;
        }
    }
}

In the presented example, Rule1 accepts two input parameters and returns their sum.
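The same runtime-compilation idea can be illustrated in Python, where the built-ins compile and exec play the role of the .NET runtime compiler (a sketch only; the PSE agent itself uses the C# mechanism shown above):

```python
def load_rules(source):
    """Compile rule source code at start-up and return the defined callables,
    mirroring the runtime-compiled configuration file of the PSE agent."""
    namespace = {}
    exec(compile(source, "<dynamic-rules>", "exec"), namespace)
    return {name: obj for name, obj in namespace.items() if callable(obj)}

# Example rule source, analogous to the C# configuration file above.
RULE_SOURCE = """
def Rule1(d1, d2):
    # Same behaviour as the C# Rule1: return the sum of the two inputs.
    return d1 + d2
"""

rules = load_rules(RULE_SOURCE)
```

An administrator can thus add or change a rule by editing the configuration source without touching the application itself.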

Task 3.3: Agents / holons software development


Within WP3, Task 3.3 represents the transfer of our conceptual design into a software system. In Task 3.1 we laid out the functionality needed to provide a solution for the two selected demonstration use cases "Alternative Thickness" and "Reallocation". In Task 3.2 we worked out the agent and holon technology to put our solution into practice. In this Task we developed the software itself.

BFI and CSM established a series of agents for our target framework, which is based on the Visual C# language. These were checked and verified against the prototypes with respect to numerical equivalence. Based on the experience with our algorithmic design studies, the meta-agent base class was implemented in C# together with most functional agents to construct the solution for both use cases. The software development here is strictly based on the requirements engineering process described in WP2.

Use Case alternative thickness

The alternative thickness use case has been developed in order to test the flexibility of the multi-agent architecture for solving online control problems. The goal of this use case is the setting of a new target thickness value for the hot strip mill when, e.g., the coil input temperature is too low for the planned thickness. The I2MSteel architecture allows setting the new target thickness value through a negotiation between the Level-2 and Level-3 automation systems. The list of developed agents and their functionalities for this use case is reported below.

E1Level2

This agent represents the Level-2 automation system; when the slab temperature is too low to obtain the planned target thickness at the Hot Strip Mill, the user can request the calculation of a new thickness value. The user must set these parameters before issuing the calculation request through the dedicated button:

• Min Target Thickness: minimum value of the thickness to calculate; this value is imposed by the temperature of the slab and is greater than the planned one
• Slab Length: length of the slab to roll at the HSM
• Slab Thickness: thickness of the slab to roll at the HSM
• Steel Class: grade of the slab steel

When the user requests the alternative thickness, this agent sends a message to the A1AlternativeThickness agent in order to open a new negotiation. This agent is typically an external agent interfaced with the plant Level-2 automation system.

When the negotiation is closed, the suggested new target thickness value is shown in the Suggested Target Thickness textbox.

A1AlternativeThickness

This agent is the manager of the thickness value negotiation for the I2MSteel platform. It receives the message from the E1Level2 agent and opens a new negotiation for the calculation of the new HSM target thickness.

The participants of the new negotiation are the A3_Level3 and Historic agents; they provide two different solutions based on the order book and the historical production, respectively. The negotiation is limited by a maximum time interval configured in the A1AlternativeThickness agent.

A3_Level3

This agent is typically an external agent interfaced with the plant Level-3 automation system. When a new negotiation is opened by the A1AlternativeThickness agent, it looks up a suitable target thickness value in the order database.

The simplified structure of the order database is reported in the next table:

Product         SteelGrade  CsgMinThicknessCoil  TargetThicknessCoil  CsgMaxThicknessCoil  Day  Year  Month
3037L 00270000  C9350       9.53                 9.99                 10.45                14   2013  9
3008V 00410000  C9350       9.68                 9.99                 10.30                23   2013  2
3011C 00620000  C9350       9.68                 9.99                 10.30                12   2013  3

An order is composed of:

• Product: code of the order product
• SteelGrade: grade of the steel
• CsgMinThicknessCoil: minimum allowed value of the coil thickness
• TargetThicknessCoil: target value of the coil thickness
• CsgMaxThicknessCoil: maximum allowed value of the coil thickness

The A3_Level3 agent searches for the minimum target coil thickness greater than the Min Target Thickness sent by the E1Level2 agent, with the same steel grade. The solution suggested by this agent is related only to commercial issues; it is the best solution for matching production orders. A quality index is attached to the proposed solution; this index takes into account the delivery time and the required coil quantity of the order. The obtained solution is submitted to the corresponding open negotiation.
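The order-book search of A3_Level3 can be sketched as follows (illustrative Python with field names taken from the order table above; the quality index is omitted for brevity):

```python
def find_level3_thickness(orders, steel_grade, min_target):
    """Select, among the orders of the same steel grade, the smallest
    TargetThicknessCoil greater than the requested minimum thickness."""
    candidates = [
        o["TargetThicknessCoil"]
        for o in orders
        if o["SteelGrade"] == steel_grade and o["TargetThicknessCoil"] > min_target
    ]
    return min(candidates) if candidates else None
```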

Historic Agent

The historic agent is the second actor of the thickness negotiation. It provides a solution for the alternative thickness based on the historical production of the plant. It works on a table of all produced coils, see the next table:

The historic table is composed of:

• SteelType: grade of the produced coil
• Thickness: value of the coil thickness
• Numcoilproduced: number of coils produced for the corresponding steel type and thickness value

The solution suggested by this agent is the minimum thickness value greater than the requested one with the maximum number of produced coils. A quality index is added to the solution, based on the difference between the requested and the proposed target values.
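The selection rule of the historic agent can be sketched as follows (illustrative Python; the exact form of the quality index is our own convention, since the report only states that it is based on the difference between the requested and proposed values):

```python
def historic_suggestion(history, requested):
    """Among the historic thicknesses greater than the requested one, pick
    the value with the highest number of produced coils. The quality index
    shrinks with the distance to the requested value (our own convention)."""
    candidates = [(t, n) for t, n in history.items() if t > requested]
    if not candidates:
        return None
    thickness, _ = max(candidates, key=lambda tn: tn[1])
    quality = 1.0 / (1.0 + abs(thickness - requested))
    return thickness, quality
```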

The application of this use case to online Hot Strip Mill control has been evaluated in terms of its response time. The requirements of this use case established an overall calculation time of less than three seconds: the alternative thickness must be proposed to the final user before the slab arrives in front of the Hot Strip Mill. It may be that a full analysis of the order book is not feasible in real time. In that case the system needs to deliver a partial result for the alternative thickness proposal. Hence, the system has been built in such a way that it is able to give a first answer based on historical data very quickly, and searches for a specific order available in the MES systems only if there is enough time.

Several tests have been made in order to quantify the response time of the I2MSteel platform. The overall response time can be calculated as the sum of three contributions:

1. I2MSteel platform event propagation: this contribution is given by the management of the communication messages and by the propagation of negotiation events among the agents. It amounts to about 0.9 seconds.

Historic table of produced coils (cf. the Historic Agent section):

SteelType  Thickness  Numcoilproduced
C0365      2          4
C0365      2.47       2


2. Historical data querying: this contribution is given by the querying of the historical database. It depends on the complexity of the production database and on the performance of the automation systems.

3. Level-3 data querying: this contribution is given by the querying of the Level-3 order database.

It is clear that for the I2MSteel platform the response times of the existing plant automation systems are critical, because they represent the main part of the overall response time. Since the Level-3 and Level-2 systems are queried in parallel, the maximum of their two response times must be added to the I2MSteel platform time to calculate the overall time. For an online installation under typical plant automation conditions, the overall time has been estimated at about 2.4 seconds.
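The composition of the overall response time can be written down directly (a one-line Python sketch of the parallel-query timing described above; the individual Level-2/Level-3 query times used below are illustrative):

```python
def overall_response_time(platform_time, level2_time, level3_time):
    """The two automation systems are queried in parallel, so only the
    slower of the two adds to the platform's event-propagation time."""
    return platform_time + max(level2_time, level3_time)
```

With a platform contribution of 0.9 s and a slower automation query of 1.5 s, this reproduces the estimated overall time of about 2.4 s.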

When a material is de-allocated from its foreseen order, the MES refreshes the needed tonnage for this order. This order is then processed like all other current orders: the MES looks for material in the upstream flow that could be allocated to the order and, if none is found, in case of urgency an expert looks for alternative material in other plants.

Use Case Reallocation

For the development of the Reallocation use case, CSM and BFI implemented the specific agents and Siemens implemented enhancements of the platform. CSM mainly focused on base services such as DB access, the Proximity Search Engine and the prediction of the mechanical characteristics, while BFI mainly focused on the order and product agents with their graphical user interfaces. The general algorithms and the agents used have already been presented in the previous chapters. Therefore, the use case is presented here from the perspective of the final user.

In general, the reallocation use case starts when the FailedProduct agent is triggered by the user. First of all, the user asks the agent to show the list of the failed products which are waiting for reallocation. This is done by clicking the button "Get failed products", cf. Figure 44.


Figure 44. User interface of FailedProductAgent

After that the user can select one or more rows within the Failed products table and press the "Reallocate" button. The selected products are then submitted to the market place and the reallocation can take place.

The calculation of the matching order is done by the OrderAgent. The OrderAgent is a consumer of the market place and gets automatically informed when new auctions are placed on the market place.


Figure 45. User interface of OrderAgent

On its user interface the OrderAgent presents the available auctions, i.e. the products waiting for reallocation, within the "Products waiting for reallocation" table. There the user can select one of the products and trigger the search for matching orders. Two options exist: the first one, "Find matching hot coil orders", searches for orders for intermediate products without reworking possibilities; the second one, "Find matching final orders", searches for orders which can be served by the given product after applying an additional treatment.

The current user interface has been developed in order to evaluate the results provided by the solution. In WP6 we intend to improve the user interface towards a more user-friendly design and also to decouple it from the agent platform. That means that the user interface will be an external application which communicates with the agent platform via web services.


Task 3.4 Integration and local functionality tests

In order to examine the developed solutions and to perform validity tests, a simulation environment has been installed. The simulation environment contains a fully integrated SOA stack including the Service Functions, Service Description and Service Access management. Within the project, the SOA architecture provides the interface to the data. In order to simulate data access under the most realistic conditions, a database with a copy of real data originating from the ArcelorMittal plant has also been deployed. More details on the SOA can be found in the description of WP4.

The simulation environment has been installed on a virtual machine at the cluster facilities of CETIC and is accessible to the other partners via the Internet. This allows each partner to perform the integration and functionality tests independently, in relation to their own responsibilities.

The testing activities have been performed on different levels. Generally we differentiate between the following levels:

• Unit tests: refer to tests performed during coding in order to verify the functionality of code sections (methods, classes etc.). Here each developer was responsible for ensuring the functionality of the developed code and for generating the necessary tests.

• Integration tests: these tests verify the functionality between independent components, here agents and services. For this purpose a special test agent has been developed, which simulates the agent communication and is able to verify the results.

• System tests: refer to tests that verify the functionality of the whole system. These tests have been performed in collaboration between all involved partners. Special user interfaces have been developed which provide additional intermediate results allowing the analysis of the functionality.

• Acceptance tests: are done by the final users and will be described in detail in the WP6 section. To ensure the validity of the development we regularly exchanged the results with the end users, both the responsible people at ArcelorMittal and their IT experts.

For the execution of tests we have used different tools provided by Visual Studio and TFS. For example we have used the tool ReSharper, which automatically checks compliance with the coding guidelines we defined within Task 2.2. For unit testing we have applied the native testing environment of Visual Studio in conjunction with the Team Foundation Server. In addition, the Visual Studio extension Unit Test Generator has been used by each developer in the team. In this way, unit tests are developed and compiled together with the regular code.

Additionally, the Team Foundation Build management is configured in such a way that all related tests are run whenever an automated build is executed. Therefore an automated build is only successful if both the compilation process and all related tests complete successfully. This mechanism ensures smooth and efficient collaboration between the distributed developer teams of the I2MSteel project.

Regarding the system tests, as already mentioned, a full simulation system has been installed. Some details on the installation can be found in the next sections.


Data Services

For the purposes of the integration and system tests we have deployed the products data from "ArcelorMittal Division North" and the "ArcelorMittal Sagunto plants". These datasets are called FosSurMer and Sagunto respectively, and the database is called PRODUCTS. We have also deployed the orders data, likewise prepared and provided by ArcelorMittal. The orders are in the ORDERS database, and the dataset names are again FosSurMer and Sagunto.

Two DB technologies have been deployed: the MongoDB NoSQL database and the Aerospike NoSQL database. Both are open source and available for free.

• MongoDB hosts the ORDERS data, which is the bigger dataset, and has good support for querying the dataset.

• Aerospike hosts the PRODUCTS data, which is a smaller dataset. Aerospike is quite fast at scanning through the whole data set and providing all data if needed. It has limited filtering capabilities, and a dataset can be distributed across several servers.

The SAL has an access control filter; all connections are encrypted using HTTPS and its services are password protected.

MongoDB service

The data in MongoDB can be accessed using the following URL format:

https://i2msteel-webservices.cetic.be/AgentStorage/api/ORDERS/Sagunto/?query={MaxWeight:{$gt:5512}}&fields={_id:0}

The presented example returns entries with MaxWeight > 5512 and does not return the MongoDB-generated _id.
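A client of this service only needs to percent-encode the MongoDB-style query string and append it to the dataset URL. The following Python sketch builds such a request URL and parses a response; the HTTP Basic credentials are not shown and the sample response document is a made-up illustration, not real plant data:

```python
import json
from urllib.parse import quote

BASE = "https://i2msteel-webservices.cetic.be/AgentStorage/api"

def build_orders_url(dataset, query, fields=None):
    """Build an ORDERS service URL carrying a MongoDB-style query string."""
    url = f"{BASE}/ORDERS/{dataset}/?query={quote(query)}"
    if fields:
        url += f"&fields={quote(fields)}"
    return url

url = build_orders_url("Sagunto", "{MaxWeight:{$gt:5512}}", "{_id:0}")

# The service would be called over HTTPS with Basic authentication and would
# return a JSON array of matching order documents; an illustrative example:
sample_response = '[{"OrderId": "A-1", "MaxWeight": 6000}]'
orders = json.loads(sample_response)
```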

Aerospike service

The servlet-based service for Aerospike allows interactions with the Aerospike database through an HTTPS interface. A web page that can be used for requests to the servlet is provided:

https://i2msteel-webservices.cetic.be/aerospike/example.html

The servlet exports a REST interface, according to which the keys in the Aerospike database are expressed as namespace/set/keyname triples. In order to fetch a particular key:

https://i2msteel-webservices.cetic.be/aerospike/client/<namespace>/<set>/<keyname>

Searching for particular entries in the database that match certain criteria is implemented using the filter parameter. The query can also specify the bin (field) to display. The request format is:

http://<host>:<port>/aerospike/client/<namespace>/<set>/?bin=<field name>&filter=<field value>&allbins=true&typeoffilters=numeric
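A client can fill these URL templates mechanically. The Python sketch below does so; the function names are hypothetical, and the namespace/set values ("PRODUCTS", "Sagunto") are taken from the datasets described earlier as an assumed example:

```python
from urllib.parse import quote

def aerospike_key_url(host, namespace, aset, keyname):
    """URL fetching one record addressed by its namespace/set/keyname triple."""
    return f"https://{host}/aerospike/client/{namespace}/{aset}/{quote(keyname)}"

def aerospike_filter_url(host, namespace, aset, bin_name, value):
    """URL for a numeric filter query on one bin, per the request format above."""
    return (f"https://{host}/aerospike/client/{namespace}/{aset}/"
            f"?bin={quote(bin_name)}&filter={quote(str(value))}"
            "&allbins=true&typeoffilters=numeric")

url = aerospike_filter_url("i2msteel-webservices.cetic.be",
                           "PRODUCTS", "Sagunto", "Width", 1200)
```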

Integration with Database Agents

The DBService is a library that can be used to dynamically load the appropriate service client for the web database service and to run queries on the service. The library uses the properties file clientproperties.json to identify the DBClient implementation that can be used to access the service based on the endpoint URL. The DBService uses the registry client to discover and retrieve the service endpoint and service information. Discovery is performed using keywords for the plant and the service, as described in the section on Task 4.1.

Service Registry

This platform provides an SOA interface to register and locate web services of the I2MSteel platform or services linked to or provided by the plants. This SOA interface is developed in Java using the Jersey and Java Servlet technologies and is deployed in a Jetty container (CETIC server). The registering method allows registering a service by means of its service description file. This file can either be a WADL (description file used in the case of a RESTful architecture) or a WSDL (description file used in the case of an SOA interface following the Simple Object Access Protocol, SOAP).

Four methods are available:

• The key method allows listing the keys of each registered web service. The keys are the location field and the service name field.

• The lite method has the same functionality as the key method but adds extra fields.

• The endpoint method retrieves the endpoint of a specific registered web service, which is identified by the keys provided (serviceName and locationName).

• Finally, the full method retrieves all data including the WADL or WSDL service description document.
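The four registry methods above can be addressed by a very small client. The Python sketch below only builds the request URLs; the base path and the exact route layout are not specified in the report and are therefore assumptions for illustration:

```python
# Assumed base path of the registry service; the actual route layout is not
# given in the report, so this is an illustrative sketch only.
REGISTRY = "https://i2msteel-webservices.cetic.be/registry"

def registry_url(method, service_name=None, location_name=None):
    """Build a request URL for the key / lite / endpoint / full registry methods."""
    if method in ("key", "lite"):
        # key lists the (location, service name) keys; lite adds extra fields
        return f"{REGISTRY}/{method}"
    if method in ("endpoint", "full"):
        # endpoint and full identify one service by its two keys
        return (f"{REGISTRY}/{method}"
                f"?serviceName={service_name}&locationName={location_name}")
    raise ValueError(f"unknown registry method: {method}")

url = registry_url("endpoint", "ordersDB", "Sagunto")
```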

Ontology

As already described within Task 2.7, the ontology has been installed on a virtual machine of the CETIC infrastructure, applying the Semantic MediaWiki technology stack and using the "4store" triple store as back end. For more details please see the Task 2.7 description.


2.2.4 Work package 4: Services Development

Task 4.1: Implementation of Service abstraction layer

The Service Abstraction Layer (SAL) provides a collection of services and communication interfaces

using standard protocols, security and access control. The aim is to decouple the communication

infrastructure of the plant systems from the agent platform, enable interoperability and provide

components that are reusable.

The SAL architecture design process started with the collection and analysis of requirements for the SOA services that the layer will support. This step was performed in close collaboration with the software R&D team of the pilot plant. Following the conclusions of the requirements analysis, Task 4.1 focused on producing an architecture design for the Service Abstraction Layer and an implementation, in order to conduct tests and experiments for the evaluation and validation of the technologies with regard to the requirements.

The analysis of the requirements, the description of the architecture and the conclusions of the

experiments with the prototype services are summarized in the following paragraphs.

Functional requirements:

• An abstraction layer for accessing external information sources through a unified, well-defined interface. The services developed for the abstraction layer will use standard interfaces and protocols, will be independent of the operating system and programming language, and will be re-usable.

• An SOA interface for accessing plant data sources and other user or platform information databases.

• Persistence services for storing user interface data, information about the platform and platform historical data.

• Mechanisms and services for communication between platforms. The mechanisms should include authentication and access control.

• Ontology querying services and semantic translation services between ontologies, for establishing interoperability in the information exchanges between plants.

• Security requirements: secure communication using encrypted messages, logging and monitoring of the security mechanisms, access control of resources.

Non-functional requirements:

• Integration details such as the interfaces and protocols must be available and known at compile time. Deployment of services should not interrupt the availability of the layer.

• Flexibility in terms of supported protocols. Plants may already have some data services or other application services deployed. The abstraction layer should be easily extensible and ready to support the deployed services and, in general, the most common protocols. The abstraction layer will use industry-standard communication protocols and tools for integrating with external systems.


• Securing of all services, including an access control service. Security must enable user authentication and be able to verify the authenticity of messages and the access privileges of a user's actions on resources.

• A service registry, a component acting as the yellow pages of the available services for the agent platform.

• Availability and quality of services. Services should meet the agreed quality and availability metrics, and response mechanisms should be in place for when agreements are breached.

Following the validation experiments of the services, the integration phase began, during which the architecture design and implementation of the SAL were revisited and refined. This section focuses on the refinements of the architecture and implementation as well as on the results from using the services.

Architecture design refinements and implementation

The Service Abstraction Layer (SAL) achieves information interoperability and integration between manufacturing plants by using a distributed Service Oriented Architecture (SOA) [2]. The SAL focuses on a) enabling data exchange facilities that are fast and easy to discover and access, b) securing communication channels and promoting trust between participants, and c) providing interoperability in the structure and semantics of data.

a. Information exchange services.

The SAL focuses on the integration of services by providing the protocols and interfaces for information exchange, the QoS by providing security and logging, and the infrastructure for making all this available. The SAL provides components for handling persistence and transport of data between plant services and the agent platform; components for security and access control services as well as persistence for quality of service; and a service registry for governance. Additionally, data interoperability services are also included in the SAL. The architecture diagram presents each layer of the I2MSteel platform and its role. The deployment diagram represents the setup and boundaries of the I2MSteel system, its key components and the interactions between them. The I2MSteel agent platform provides the business logic, applications and services for the users.


Figure 46 SOA and Agent Platform layers architecture diagram

Figure 47 Deployment diagram of the SOA I2MSteel Services and the Agent platform


As demonstrated in the deployment diagram, the widely used web service protocol REST2 over HTTPS3 forms the building blocks for implementing the SOA Abstraction Layer. As these are established protocols and architecture approaches used in distributed systems, they allow seamless integration both with the I2MSteel agents and with the services deployed in the plants. The following paragraphs provide the architecture design and implementation details of each of the key information exchange components that are presented in Figure 46. The Security Service with its implementation details is explained in the "Security Aspects" paragraph, the Semantic Web services are explained in the data interoperability and semantics paragraph, and finally the "SOA Client" agent details are provided in the Task 4.2 section.

The Persistence Services

The SAL uses persistence services for storing all types of information around the services, for the agents and for plant-provided services. Such data include logs of service usage for monitoring, service descriptions for the service registry, cached data values from the plant databases and, last but not least, actual plant data, since the store can be used as one of the plant data stores. The variety of data types and of the ways in which the data store is used led to the choice of NoSQL databases [13] as the platform's storage technology. The NoSQL databases that we have used are:

• MongoDB4, a document-oriented database [12], has become one of the most popular database systems [11]. The Service Abstraction Layer requires a multi-purpose storage system, which makes a document-oriented database ideal for the role, as entries do not have to follow a specific schema and the type of the values stored in them can change dynamically according to the type of the data. MongoDB has a rich query language that allows complex queries on the data. We have used MongoDB for the registry (as described below), the ORDERS database and the PRODUCTS data sets. It has also been used for storing the results of agent queries for query-caching purposes and for registering the reallocated products. The query caching can help the performance of the system, while the registration of reallocated products is needed by the reallocation use case to avoid double-reallocating the same product.

• Aerospike5 is a key-value database [13]. This database has been chosen for its speed and its distributed architecture. As tests have shown, it is quite fast and especially efficient at scanning the whole dataset from one or many distributed instances of the collection. We have concluded that this feature is useful for the PRODUCTS data set, which is a list of products that need to be reallocated. The PRODUCTS sets are often scanned fully for updating the products list in the product agent.
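The registration of reallocated products mentioned above is essentially a guard keyed by product identifier. The following Python sketch illustrates the idea with an in-memory stand-in; the class and method names are hypothetical, and the project's actual implementation is in C# against a MongoDB collection:

```python
class ReallocationRegistry:
    """In-memory stand-in for the collection that records already-reallocated
    products, so that the same product is never reallocated twice."""

    def __init__(self):
        self._reallocated = set()

    def try_reallocate(self, product_id, order_id):
        # Refuse products that have already been reallocated once.
        if product_id in self._reallocated:
            return False
        self._reallocated.add(product_id)
        # A persistent implementation would also store (product_id, order_id).
        return True

registry = ReallocationRegistry()
first = registry.try_reallocate("C0365-001", "ORD-42")   # accepted
second = registry.try_reallocate("C0365-001", "ORD-99")  # refused: already reallocated
```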

Although ideally both database types would be deployed, in order to facilitate the maintenance and deployment of the I2MSteel architecture at the Fos-sur-Mer and Sagunto plants we have chosen to use MongoDB for the pilot plant deployments and to use Aerospike only in our test-bed deployment, explained in the following paragraph.

2 http://en.wikipedia.org/wiki/Representational_state_transfer

3 https://en.wikipedia.org/wiki/HTTPS

4 http://www.mongodb.org

5 http://www.aerospike.com


For development and testing purposes we have deployed a test-bed environment at the CETIC cluster. The deployment of this environment is explained in detail in the Task 4.3 paragraph of this section. It consists of two data collections: one for the orders, called "ORDERS", deployed on MongoDB, and one for the products for reallocation, called "PRODUCTS", deployed on the Aerospike system. For both data collections two web services have been developed, providing a REST [3] interface over HTTPS. This SAL deployment included an authentication mechanism of the Basic type (username and password) [1] and a separate access control service that implemented a PDP, while a PEP was installed for all requests to the server. As already mentioned, the HTTPS protocol is used for the communication with the service, which means that all request and response messages are encrypted. The service was tested using the LoadUI testing framework, hosted on a machine that was separate from the server machine both physically and in terms of network configuration. For the tests, five different web service 'runners' were used (representing five agents) that were called in round-robin order, at random intervals, 5 times per second, which translates to around 20 requests per second for the server. The results of these tests are shown in the following figures (Figure 48, Figure 49). The graphs display the maximum response times of the service for each of the 'runners'. The times range between 20 and 80 ms for the Aerospike system and between 200 and 500 ms for the more complex queries of the MongoDB system.

Figure 48 Aerospike database performance test results

Figure 49 MongoDB performance test results.

The Web Service Registry

The Service Registry provides the interfaces and functionality for searching, storing and retrieving

service description documents. The service descriptions provide the interfaces the endpoint and

metadata of the plant application and data services. The Service Abstraction Layer will use the

service registry to register plants and platforms and store technical information regarding their

operations. The registry supports two types of service description documents depending on the

web service type, SOAP or REST:


• The Web Services Description Language6 (WSDL) based document, which is used for describing the interface and functionality offered by a SOAP [4] web service.

• The Web Application Description Language7 (WADL) document, which is an XML description of HTTP-based web applications (typically REST [3] web services).

The agents can use the SOAClient library to query for available services. The client library is configured by default to use the registry for discovering the available web services. In this way the client library retrieves the service description documents with all the technical specifications needed for making the service calls. Subsequently the web service client is constructed and the web services are invoked.

The service registry is developed using the Jersey8 web service framework and is deployed in the SAL's web service container. As mentioned before, the MongoDB database system is used as the storage back end for the registry. The registry provides interfaces for querying the available registered web services using parameters like the names of the plant and of the service. The result, depending on the call used, is either the endpoint or the full description of the service. Each entry inside the registry contains the WADL or WSDL file and some metadata: the name of the plant the service belongs to, the name of the service and the endpoint address. An alternative implementation of the service registry has been developed inside the SOAClient library: the InternalRegistryClient. This implementation is internal to the library and thus to the agent framework. The internal registry is ideal for fast deployments of the I2MSteel platform, for example for demos or for testing, where an independent registry of services with an SOA API for updating or adding content is not required or not so important. Task 6.3 describes the registry configuration method.

b. Security implementation

Depending on the deployment use case it can be crucial that the infrastructure is secured and that

it provides access control restrictions on the data services. Plant operators must control who is

accessing which chunk of data and define policies that authorize or restrict access to sensitive in-

formation, thus increase confidence in sharing and exchanging of data. The platform is setup to use

user/password (current configuration) or certificate based authentication. The Policy Enforcement

Point (PEP) that was implemented and deployed at the container filters all requests. It then calls

the Policy Decision Point (PDP), which evaluates the requested action on a resource by a user ac-

cording to the deployed XACML policy file; an example of XACML file has been provided in the Mid-

term report. If the policy permits the action, the user request will be let through to the service and

the client will receive a response from the service. If the policy does not permit the action then an

“Access denied” error message will be returned to the client. This policy based access control setup

is presented in the following diagram and has also been implemented in Grid-Trust FP6 project [6].
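The PEP/PDP interaction described above can be sketched in a few lines. This is an illustrative Python sketch of an XACML-style permit/deny decision, not the project's actual XACML engine; the policy structure, subjects and resource names are assumptions:

```python
# Minimal XACML-style decision sketch: a policy is a list of rules, each
# matching a (subject, action, resource) triple and carrying an effect.
POLICY = [
    {"subject": "orderAgent", "action": "read",  "resource": "ORDERS", "effect": "Permit"},
    {"subject": "orderAgent", "action": "write", "resource": "ORDERS", "effect": "Deny"},
]

def pdp_decide(subject, action, resource):
    """Policy Decision Point: first matching rule wins; default deny."""
    for rule in POLICY:
        if (rule["subject"], rule["action"], rule["resource"]) == (subject, action, resource):
            return rule["effect"]
    return "Deny"

def pep_filter(subject, action, resource):
    """Policy Enforcement Point: intercepts the request, asks the PDP,
    and either forwards the request or rejects it."""
    if pdp_decide(subject, action, resource) == "Permit":
        return "request forwarded to service"
    return "Access denied"

ok = pep_filter("orderAgent", "read", "ORDERS")
blocked = pep_filter("orderAgent", "write", "ORDERS")
```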

6 http://www.w3.org/TR/wsdl

7 http://www.w3.org/Submission/wadl/
8 https://jersey.java.net


Figure 50 Security components interaction diagram

c. Data interoperability and semantics architecture and implementation

The I2MSteel architecture aims to enable plant-agnostic software components to operate at any factory despite the syntactic and semantic heterogeneity of the different data models. Syntactic heterogeneity issues are related to the way information is represented at each data source. Semantic heterogeneity relates to the use of different terms or languages for referring to the same concept. Software components in I2MSteel will be able to automatically resolve both heterogeneity types by using the semantic interoperability services.

Interoperability is achieved by linking the concepts/terms of the high-level (core) steel domain ontology, which has been developed in WP2 to provide a common terminology for all participating plants, and the plant-specific ontologies, which provide the terminology and the descriptions of the methods and models used in each plant. These links are implemented either by an inheritance relation between entities or by explicitly defined interrelationships between entities of the high-level and plant ontologies, for example via the Web Ontology Language (OWL) property owl:sameAs. We are also developing a third level which maps the database schema to the plant ontology and effectively adds a SPARQL (Query Language for RDF) interface to the database. In this way the plant databases can also be accessed and exposed as RDF (Resource Description Framework) graphs and effectively queried in the same way as the ontologies.

The semantic mapping between the ontology layers creates associations between related concepts of the domain ontology, the plant ontology and the actual data in the data stores. This approach allows the translation of a client query from the domain ontology, which all plants participating in the project platform are able to understand and use, to the local one used in each of the plants, and eventually, if requested, to the database's native query language. The approach is generic enough to be reused in other applications; for example, a similar solution was applied in the clinical research domain [7]. The reallocation use case, which involves multiple plants with one agent platform each, benefits from this approach when agents of different plants i) request product data values, or ii) query a plant's model to identify manufacturing processes, logistics or other plant information.

The ontology service has been used, as presented in Figure 51, mainly for returning the name of the requested term using the mapping service. The DB agent can then use the term in its transactions with the database.


Figure 51. Retrieving local term based on the I2MSteel ontology one

Several approaches were investigated for the implementation of the semantic mapping services. Among those we have tested the Virtuoso server, the Semantika9 server and the D2RQ server. An additional complication and challenge was the nature of the databases we have used in the project for the plant products and customer orders. These two datasets are stored in a NoSQL document-oriented database, MongoDB, which dynamically adjusts its schema to the data that is stored in each document (each entry in the database). The absence of a concrete schema introduced problems for the server mapping the ontology to the database; this was solved by using the unityJDBC10 driver with the D2RQ server. The D2RQ server does not require validating the schema, unlike Semantika, and the unityJDBC driver provides the core JDBC functionality and API. This enabled the plant products and orders databases to be accessed as RDF triple stores using a SPARQL endpoint and to be browsable by both humans and machines.

The semantic agent can query this service using SPARQL queries to retrieve mappings between the I2MSteel ontology and the database schema. As these mappings do not change very frequently, a mechanism has been implemented to serialise them into JSON format and cache them in a local file, in order to avoid the semantic agent calling the ontology service every time the database needs to be accessed. With this caching of the mappings the agents of the platform can be agnostic of the schema of the plant database and only need to know the domain ontology terms for the data they need to access. The format of the cached mapping is shown in the JSON code below:

9 http://obidea.com/semantika/

10 www.unityjdbc.com


[{"definition":"http://steel.eu/product.coil",

"label":"Coil", ... ,

"dataAttributes":[{

"definition":"http://steel.eu/product.coil_width",

"label":"Width", ...}],

"equivalentTo":{

"definition":"http://steelcompany.com/Coil",

"label":"Coil", ...,

"dataAttributes":[{"definition":"http://steelcompany.com/Width","label":"Width", ...}],

"equivalentTo":{

"definition": "https://steelcompany.com/d2rq/resource/PRODUCTS/Sagunto/PRODUCTS_Sagunto", ...,

"dataAttributes":[{

"definition" : "https://steelcompany.com/d2rq/resource/PRODUCTS/Sagunto/Width",

"label":"Width", ...}

]}

...}]

The ontology mapping offered by the semantic service can also be used to translate queries of the I2MSteel domain to MongoDB queries and to retrieve results directly from the database. This way of accessing the database suffers performance penalties from the ontology mapping and querying services and should only be used in exceptional cases.
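To illustrate how an agent could resolve a domain term against the cached mapping, the Python sketch below walks the nested "equivalentTo" chain of the JSON structure shown above, from the domain concept down to the database-level attribute. The concrete URIs are those of the example; the helper function is hypothetical:

```python
import json

# Cached mapping in the format shown above (elided fields omitted).
cached = json.loads("""
[{"definition": "http://steel.eu/product.coil",
  "label": "Coil",
  "dataAttributes": [{"definition": "http://steel.eu/product.coil_width", "label": "Width"}],
  "equivalentTo": {
    "definition": "http://steelcompany.com/Coil",
    "label": "Coil",
    "dataAttributes": [{"definition": "http://steelcompany.com/Width", "label": "Width"}],
    "equivalentTo": {
      "definition": "https://steelcompany.com/d2rq/resource/PRODUCTS/Sagunto/PRODUCTS_Sagunto",
      "dataAttributes": [{"definition": "https://steelcompany.com/d2rq/resource/PRODUCTS/Sagunto/Width",
                          "label": "Width"}]}}}]
""")

def resolve_attribute(mapping, concept_label, attr_label):
    """Follow the equivalentTo chain down to the database level and return
    the local definition URI of the requested attribute, or None."""
    for concept in mapping:
        if concept.get("label") != concept_label:
            continue
        node = concept
        while "equivalentTo" in node:       # descend domain -> plant -> database
            node = node["equivalentTo"]
        for attr in node.get("dataAttributes", []):
            if attr.get("label") == attr_label:
                return attr["definition"]
    return None

uri = resolve_attribute(cached, "Coil", "Width")
```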

Task 4.2: Implementation of SOA based agents / holons communication mechanisms

In the previous sections we have described the requirements and architecture of the SOA-based agents which act as clients for the web services of the Service Abstraction Layer. Two alternative implementations have been identified: the internal SOA-capable agent, which is an agent that can invoke the services itself, and the external SOA-based agent, which acts as a web service proxy between the D'ACCORD platform and third-party services. As defined in the requirements, both types of agents can support the web service protocols SOAP and REST over HTTPS and the security protocols (authentication, access control and message encryption), and are capable of local pre- and post-processing of data.

Because of the passive role of the internal SOA-based agent, and because only a certain type of agent requires access to the SOA architecture, we have decided that it is unnecessary to incur the overhead of the agent functionality in our SAL client implementation. Instead, for the time being, only the library implementing the web service client is used by the DBAgents.

Implementation

The SAL client library of the agent platform is called DBService and belongs to a collection of SOA utilities with the namespace SOAUtils. It is a web service client implementation that uses the standard libraries of the C# .NET Framework and supports secure, encrypted connections. The access credentials of the web services, the implementing class of the client and the endpoint are loaded from a configuration file, clientproperties.json.

There are several database service clients available for the agents, one for each type of service and for the type of data it returns. For example the MongoDBClient is used for accessing the data service of MongoDB. The DBService library can dynamically instantiate the appropriate service client, using the Factory pattern, and invoke web service calls. The DBClientFactory uses the properties file clientproperties.json to identify the DBClient implementation that can be used to access the service based on the endpoint URL. The DBService uses the registry client, which contacts the service registry described in Task 4.1 to discover and retrieve the service description document. The service description document contains the endpoint and other information about the service. Discovery is performed using keywords, which are the name of the plant to which this service belongs, the service name and other details of the service such as the database name and the dataset.

The interface of the library is defined and described in IDBService.cs:

public interface IDBService {

    /* Connects to the database.
       Uses the registry to discover the appropriate data service to use,
       based on a search of the available plants and DBs.
       The dataset name and the name of the DB to query must be provided as
       parameters. Agents can use the registry to discover the following
       parameter values. */
    void connect(String plantName, String dbName, String setName);

    /* Queries the database. Uses filters and fields. Returns serialized JSON. */
    String query(List<String> fields, List<Filter> filters);

    // Returns deserialized JSON, in a DataSet object.
    DataSet queryInDataSet(List<String> fields, List<Filter> filters);

    // Sets a query string instead of filters. This overrides the filters.
    void setQueryString(String query);

    // Returns the registry client that the DBService instance is using.
    RegistryClient getRegistryClient();
}

Currently the data received from the web services can be de-serialised by the DBService library, but the post-processing is performed by the DBAgent, the agent responsible for using the library.
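As a minimal sketch of the factory mechanism described above: the DBClientFactory and clientproperties.json names are taken from the report, while the client classes and the in-code map standing in for the parsed properties file are illustrative, and the sketch is in Java rather than the C# of the actual library.

```java
import java.util.Map;

// Illustrative sketch: the factory picks a DBClient implementation based on
// the service endpoint URL, as DBClientFactory does using clientproperties.json.
// All names except DBClientFactory/MongoDBClient are hypothetical.
interface DBClient {
    String query(String queryString);
}

class MongoDBClient implements DBClient {
    private final String endpoint;
    MongoDBClient(String endpoint) { this.endpoint = endpoint; }
    public String query(String queryString) {
        // A real client would issue an HTTP call to the data service here.
        return "mongodb-result-from:" + endpoint;
    }
}

class AerospikeDBClient implements DBClient {
    private final String endpoint;
    AerospikeDBClient(String endpoint) { this.endpoint = endpoint; }
    public String query(String queryString) {
        return "aerospike-result-from:" + endpoint;
    }
}

class DBClientFactory {
    // Stands in for the parsed clientproperties.json:
    // endpoint URL fragment -> client type.
    private static final Map<String, String> CLIENT_TYPES = Map.of(
            "/AgentStorage/", "mongodb",
            "/aerospike/", "aerospike");

    static DBClient create(String endpointUrl) {
        for (Map.Entry<String, String> e : CLIENT_TYPES.entrySet()) {
            if (endpointUrl.contains(e.getKey())) {
                switch (e.getValue()) {
                    case "mongodb":   return new MongoDBClient(endpointUrl);
                    case "aerospike": return new AerospikeDBClient(endpointUrl);
                }
            }
        }
        throw new IllegalArgumentException("No DBClient registered for " + endpointUrl);
    }
}
```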

Task 4.3: Testbed setup, software testing and migration paths

Level 2 and 3 simulation environment components

The Testbed deployment of the Service Abstraction Layer platform has the ORDERS data collection, with data from ArcelorMittal, deployed on the MongoDB database system. For the PRODUCTS collection we used data from the Sagunto and Fos-Sur-Mer plants and deployed them on both the Aerospike and MongoDB systems. These data represent a selection of Level 2 and Level 3 production data from ArcelorMittal's Order Management system and plant systems, and were delivered to us for the purpose of building a virtual simulation environment in which to perform integration and testing of the I2MSteel components and the overall infrastructure.

A web service for each of the collections has been deployed on the CETIC cluster over HTTPS (with bidirectional message encryption). The web service follows the representational state transfer (REST) architectural style, which is optimised for performance, scalability, reliability and simplicity of interfaces. The service container used for the development of this prototype and for this experiment is the Jetty [11] container; the Jersey [12] web-service framework was used for the development of the data service. The databases are deployed and available to authorised users at the following URLs (query parameters are also required):

• https://i2msteel-webservices.cetic.be/AgentStorage/api/ORDERS/Sagunto?query={}

• https://i2msteel-webservices.cetic.be/AgentStorage/api/PRODUCTS/Sagunto?query={}

• https://i2msteel-webservices.cetic.be/aerospike/client/PRODCUTS/FosSurMer/…
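The query URLs above follow a common shape. A small, hypothetical helper for assembling them could look as follows; the base URL and path layout are copied from the list above, but the helper itself is not part of the platform.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative helper: builds a data-service URL of the shape
// <base>/<collection>/<plant>?query=<url-encoded JSON query>.
class DataServiceUrl {
    private static final String BASE =
            "https://i2msteel-webservices.cetic.be/AgentStorage/api";

    static String build(String collection, String plant, String jsonQuery) {
        // The query parameter carries a JSON document, so it must be URL-encoded.
        return BASE + "/" + collection + "/" + plant
                + "?query=" + URLEncoder.encode(jsonQuery, StandardCharsets.UTF_8);
    }
}
```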

Triplestores and ontology development infrastructure

As part of the Test-bed infrastructure setup, this task deployed a triplestore for the activities of Tasks 2.4, 2.5 and 2.6 of Work package 2, as well as for testing the ontology-based services being developed in Task 4.1. This work included a detailed analysis of several triplestore technologies supported by the Semantic wiki, a Mediawiki [13] based web application for ontology development, querying and browsing, which was chosen by Task 2.7 for the ontology development. The triplestore investigation considered mainly tools that are compatible with the Semantic wiki, namely the Joseki (and its evolution, Fuseki), 4store [14] and Virtuoso [15] triplestore implementations. For performance reasons the most lightweight implementations were favoured, and because of technical problems with the Joseki implementation we finally decided to use 4store. This task deployed both the Semantic wiki application and 4store and populated the triplestore with the current version of the I2MSteel ontology. The Semantic wiki used for ontology development and its triplestore are deployed on a server in CETIC's cluster and can be accessed at the following URL: http://i2msteel-webservices.cetic.be/mediawiki
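For illustration only, an agent-side lookup against the triplestore could be phrased as a SPARQL query of the following kind; the namespace and property names are invented, as the report does not list the ontology vocabulary.

```java
// Illustrative sketch: builds a minimal SPARQL query an agent might send to
// the triplestore's SPARQL endpoint to look up the data sources of a plant.
// The i2m: namespace and the hasName/hasDataSource properties are hypothetical.
class SparqlQueries {
    static String dataSourcesForPlant(String plantName) {
        return "PREFIX i2m: <http://example.org/i2msteel#>\n"  // hypothetical namespace
             + "SELECT ?source WHERE {\n"
             + "  ?plant i2m:hasName \"" + plantName + "\" .\n"
             + "  ?plant i2m:hasDataSource ?source .\n"
             + "}";
    }
}
```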

Task 4.4: Software packaging and configuration

The I2MSteel platform is packaged into two bundles. One contains D'ACCORD and the implementation of the agents, and the other contains the packages for installing the Service Abstraction Layer.

The first one is a Zip file that includes:

• The D’ACCORD executable.

• The DLL libraries for the Agents.

• The SOA client library.

• Libraries with utility classes we have implemented.

[11] http://eclipse.org/jetty/
[12] https://jersey.java.net
[13] http://semantic-mediawiki.org
[14] http://4store.org
[15] http://virtuoso.openlinksw.com


• README files for deploying the platform and the test use cases.

The second is again a Zip archive that includes:

• MongoDB database package.

• The web service container (Tomcat or Jetty).

• The war for the Registry and Persistence services.

• The war for the security services.

• The CSV files with the data sets for deploying the test environments.

• README files for deploying the services.

The two README files contain detailed instructions for installing, configuring and setting up the use

cases. These files are provided with the software deliverables of the SAL and the Agent platform.

The hardware and software requirements of the above packages are:

• Server host machine minimum hardware requirements:

o 2 GHz processor (preferably multicore), 4 GB RAM, 64-bit architecture, 20 GB free disk space

o USB port for supporting a dongle

• Software requirements for the Agent Platform (D'ACCORD):

o .NET 4.5 Framework (requires Windows Vista or newer)

o Administrator rights for execution

o May require firewall setting adjustments

• Software requirements for the SAL platform:

o MongoDB (persistence service and registry)

o Web service container (Jetty 8 or Tomcat); both require Java 7

o Fuseki or 4store triplestore; 4store requires a Linux distribution, Fuseki requires Java 7


2.2.5 Work package 5: System integration, implementation of demonstration example, simulation, adaptation

Task 5.1: Integration of final system

This task is dedicated to the installation and setup of the developed components on the AM IT infrastructure. To accomplish this, the developed solutions have been installed in a virtual environment of AM Central IT in Fos-Sur-Mer.

In detail, this means that a dedicated server based on Microsoft Windows 2008 has been set up, which runs as a virtual machine inside a virtual server architecture. This machine hosts the Agent platform and the SAL. Both components have been successfully installed on the mentioned server and the first functionality tests have been performed.

Concerning the ontology environment, for the first application scenario it was decided to keep the installation on the CETIC infrastructure, because that installation runs on a Linux-based architecture. Nevertheless, the ontologies are accessible via the Internet and can be used inside the project without any restrictions.

Although the first test users will be engineers at the AM Sagunto plant, the server components have been installed in the IT environment of Fos-Sur-Mer, where the central IT of AM is located. It is intended that the final users will use a remote user interface to communicate with the agent platform. In this way the server applications, i.e. the agent platform and the SAL, can remain in Fos-Sur-Mer while the client application runs on the local computers of the final users. In the current solution the user interfaces are incorporated in the agent platform, but the interfaces have been designed in a modular way, so that the same implementations can easily be extracted and reused in the client applications. The first tests on this issue have already been performed successfully, so the functionality can be guaranteed. The work on extracting the user interface will be performed in WP6; in the meantime the solution can be used via a remote desktop connection.

Task 5.2: Definition of inputs, exchange protocols and way of agent activation

The aim of this task is to ensure the correct semantics of the communication between the local systems of AM and the solution provided by the I2MSteel project. One of the major problems in the integration of new systems into existing infrastructures is always related to the meaning and nomenclature of data and their interfaces. This is especially true when different plants are involved, which may be located in different European countries, using different languages and also different rules for the creation of terms. This is in fact the case in the I2MSteel project.

The solution to this problem is the use of term mappings between local terms and the terms used within the I2MSteel solution. The developers in the I2MSteel team can internally use their own terms, which are then automatically translated at runtime into the local terms used at AM. This approach gives the necessary flexibility for adaptation when changes need to be made on the plant side, and is also essential for the transferability of the solution to other plants.


To generate this automatic translation we use the semantic agent service, which automatically generates a mapping object based on the definitions made within the ontology and managed by the Semantic MediaWiki. This mapping object provides the translation of terms between the internal nomenclature and the external one, which is used within the use cases to access the data.
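The translation mechanism can be pictured with a minimal sketch, in Java for illustration. In the project the mapping object is generated automatically by the semantic agent service from the ontology; here it is filled by hand, and the example terms are invented.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the term-mapping object described above: internal
// I2MSteel terms are translated at runtime into the local plant nomenclature.
class TermMapping {
    private final Map<String, String> internalToLocal = new HashMap<>();

    // In the project, these definitions come from the ontology.
    void define(String internalTerm, String localTerm) {
        internalToLocal.put(internalTerm, localTerm);
    }

    // Translates an internal term; unknown terms are returned unchanged.
    String toLocal(String internalTerm) {
        return internalToLocal.getOrDefault(internalTerm, internalTerm);
    }
}
```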

Task 5.3: Setup of connections to databases and servers

The aim of this task was to establish the data supply from the production data sources to the agent platform. To prevent perturbations of the running production systems, it was decided that for the first evaluation tests the agent platform would not be connected directly to the production systems. Instead, additional databases have been installed, to which the original data required for the execution of the reallocation use case are regularly transferred. This compromise allows the evaluation tests of the system to be performed under real conditions while avoiding the risk of disturbing the production systems. The databases have been installed on the same server as the platform itself and have been integrated into the Service Abstraction Layer (SAL). The agent platform accesses these databases through the client-side components of the SAL. In this task the deployment and integration of the data services, the web service registry and the client-side components were performed.

Task 5.4: Simulation and adaptation

This task is devoted to the execution of preliminary system tests in order to ensure the reliable operation of the solution. The work is performed by the developer team themselves under real conditions, using real data and the same IT infrastructure that will also be used by the final users. In our case this means that the simulation study was performed at the AM facilities, initially at the Fos-Sur-Mer plant, where the installation of the agent system is physically located, and afterwards at the Sagunto plant, where the users reside. The test scenario comprised operational as well as functional tests.

Operational tests focus on the verification of the integration activities in the local environment, such as connection and setup of the databases, network configuration, rights management etc., while the functional tests focus more on the correctness of the calculations and the reliability of the results. In both cases the findings were used to remove software bugs and to optimise operational stability and correctness.

Furthermore, performance measurements, in the sense of execution speed for different examples, were carried out and potential for improvements was identified. Regarding the agent technology, the greatest potential for performance improvement lies in reducing the communication overhead between agents. To this end we partially redesigned the format of the message structures from JSON strings to a binary base64 format, which on the one hand improves the performance of the data serialisation and on the other reduces the problems with number conversions that may occur when deserialising JSON strings back into binary data tables.
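The format change can be sketched as follows, in Java for illustration; the actual agent platform is .NET-based and the exact wire format is not specified in the report. Instead of writing numeric tables as JSON text, the raw doubles are packed into a byte buffer and transported as base64, which avoids text/number round-trip conversions on deserialisation. The sketch assumes both sides agree on the byte order.

```java
import java.nio.ByteBuffer;
import java.util.Base64;

// Illustrative sketch: a numeric table row is serialised as base64-encoded
// raw doubles rather than as a JSON string, so decoding reproduces the
// original binary values exactly.
class BinaryTableCodec {
    static String encode(double[] row) {
        ByteBuffer buf = ByteBuffer.allocate(row.length * Double.BYTES);
        for (double v : row) buf.putDouble(v);
        return Base64.getEncoder().encodeToString(buf.array());
    }

    static double[] decode(String base64) {
        ByteBuffer buf = ByteBuffer.wrap(Base64.getDecoder().decode(base64));
        double[] row = new double[buf.remaining() / Double.BYTES];
        for (int i = 0; i < row.length; i++) row[i] = buf.getDouble();
        return row;
    }
}
```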

In general, the idea of this task was to simulate the operation of the solution under real conditions in order to ensure the usability of the system for the final users.


2.2.6 Work package 6: Test Phase, Evaluation of test results and investigation of transferability

The main objective of this work package is to evaluate the effectiveness of the I2MSteel developments and to benchmark their performance against previous, traditional operations and strategies.

Task 6.1: Validation Specifications

Within this task the evaluation criteria defined in Task 1.6 are refined and adapted in accordance with the executed evaluation scenario. In our case the primary evaluation scenario is the reallocation use case. Because the "alternative thickness" use case was implemented only as a laboratory prototype, using real plant data but without integration into the plant IT, its evaluation can be performed only from a limited perspective. In particular, the business indicators can only be analysed based on the data used for the implementation.

In general, according to the analysis made within Task 1.6, we differentiate between IT indicators and business indicators. IT indicators refer to performance measurements from a technological point of view (speed, availability, resource efficiency etc.), while the business indicators focus on the practical benefits of the solution in comparison with the traditional approach.

Within this task the experts of AM and SIE first evaluated the indicators defined within Task 1.6 with reference to the current project development status. Afterwards we investigated which additional data need to be considered for the calculation of the indicators, and how they can be recorded or created. Finally, for each evaluation criterion we defined the strategy for generating the results. These strategies comprise the software adjustments necessary for the analysis and measurement of the evaluation criteria. For example, in order to analyse the calculation/response time, special performance measurement routines have been included in the software sources. A more detailed overview of the criteria and the evaluation of the results is given in the Task 6.5 description.

Task 6.2: Demonstration example execution

As explained before, the demonstration phase focused on the execution of the reallocation use case, in particular at the Sagunto ArcelorMittal plant, identified as the demo site for testing the system.

Specific data coming from the plant have been merged with those provided by the central orders database, thereby testing the integration of the I2MSteel application with both the Division North and the Sagunto legacy DBs. The development group carried out a preliminary integrated test, followed by extensive use of the system by the people from the Sagunto plant in charge of reallocation. All the possibilities for reallocating a product, as illustrated in Figure 51 and Table 2, have been tested in order to highlight possible improvements and/or fixes in the developed system.

As already described in Task 3.2, for the reallocation scenario we differentiate between "as-is" reallocation and reallocation with "reworking".

"As-is" reallocation means that the system looks for an alternative order for a non-allocated product without any additional treatment. For reallocation with "reworking", the system instead searches for an order which can be fulfilled when additional treatment steps such as pickling, cold rolling (CR), annealing, electro-galvanizing (EGL) or hot-dip galvanizing (HDG) are performed. These additional steps also allow a partial modification of the final product characteristics, such as yield strength (YS) and ultimate tensile strength (UTS), which are relevant for customers.

Figure 52 Sagunto process routes

                               ORDER
PRODUCT                        HR-BL  HR-PI  CR-FH  CR-CR  HD   EG
(Category / Subcategory)
HR BL                            Y      Y      Y
HR PI                                   Y      Y      Y
CR FH                                          Y      Y     Y
CR CR                                                 Y
HD                                                          Y
EG                                                               Y

Table 2 Possible product reallocations in the Sagunto plant (HR – hot rolled, CR – cold rolled, HD – hot-dip galvanized, EG – electro-galvanized)

The output of these activities was used in Task 6.3 for the final tuning of the system.

Task 6.3: Final tuning of the system

This task was devoted partly to the tuning of the I2MSteel system functionalities, but mainly to the adaptation of the user interface. In the first prototype of the I2MSteel system the user interface was realised inside the Agent platform. This was useful for the development and testing activities, but for practical usage another solution was needed.

According to the requirements, the users may be distributed over different plants and locations, so a client/server based solution is necessary. We therefore extended the functionality of the FailedProduct and Order agents to support web service interfaces, meaning that the methods of both agents can also be accessed by external applications. Applying this concept, we developed an external application which implements the GUI for the users, using the web service protocols (HTTP, SOAP).

Figure 53, I2MSteel user interface (Product selection)

Figure 53 shows a screenshot of the GUI application. In the presented screen a list of non-allocated products is shown. Here the user can select a product, which is then presented on the right side of the screen, showing the product details in a compact format.


After pressing the "Reallocate" button, the system triggers the Agent platform to find possible orders for this product.

Figure 54, I2MSteel user interface (Results/Orders evaluation)

Figure 54 shows an example of how the results of the search are presented inside the GUI. On the left side of the screen the details of the selected product are shown. In the middle upper part a list of the complying orders is presented, while the middle lower part shows a chart which visualises all found orders along the axes "Total value" and "PSE".

"Total value" is the monetary value of the concerned product in case it is assigned to the given order; it is equal to: price of the order – (trimming costs + transport costs). PSE represents the fuzzy proximity value of how exactly the product matches the order. Values below 1 indicate a higher risk that the product will not be accepted by the customer.
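The evaluation described above can be summarised in a short sketch; the formula is taken from the text, while the class and method names are illustrative and not part of the platform code.

```java
// Illustrative sketch of the order evaluation shown in the GUI:
// Total value = order price - (trimming costs + transport costs),
// and PSE values below 1 indicate a higher risk of customer rejection.
class OrderEvaluation {
    static double totalValue(double orderPrice, double trimmingCosts, double transportCosts) {
        return orderPrice - (trimmingCosts + transportCosts);
    }

    static boolean isRisky(double pse) {
        return pse < 1.0;
    }
}
```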

When the user selects orders in the list or inside the chart, the details of the selected orders are presented on the right side of the screen. The upper part shows the details of the orders, while the lower part graphically shows the costs (here trimming and transport), the price and the PSE value in a bar chart.

Using this GUI, the final users are able to analyse which product can be reallocated to which orders.

Task 6.4: Investigation of transferability

According to Task 1.1, transferability is, besides integration and extendibility, one of the major project objectives. It is achieved inside the project by the combination of three pillars: ontologies, SOA and the agent system. Ontologies deliver a unified description of the data sources, their meaning and their relation to the processes, products and machinery. They make it possible to analyse the configuration of the environment and to find the necessary data. SOA helps to build an abstraction layer for accessing the required data or executing specialised algorithms. And finally, the agent system provides a flexible architecture for implementing extendible solutions.

Combining these three systems, we achieve a general approach for the development of transferable solutions. Using the agent system we can develop a universal implementation concept which is independent of the specific configuration. The configuration is part of the ontological model, which allows agents to find out automatically how to access the necessary data. And last but not least, if a specialised implementation of data access or the execution of a specific algorithm is required, this can be realised within the SOA. This conceptual design focuses on transferability as one of its main features.

For the realised evaluation scenarios we followed our design concept and implemented the solutions independently of the targeted plant. The plant-specific components have been integrated inside the SOA architecture, so that if the solution needs to be transferred to another plant, only the specific parts of the SOA need to be adapted, or a new configuration inside the ontology has to be defined, allowing the utilisation of a standard implementation (e.g. a standard database connection). The ontology itself provides a description of the plant and process configuration. In case of a transfer to a new plant, the configuration of the semantic plant description needs to be adapted to the new situation.

Besides the general transferability of the solution, we also investigated whether the reallocation use case can be extended to additional process steps. The reallocation use case is configured to find galvanized orders for hot-rolled or cold-rolled coils. However, it can be extended to also find orders from the colour coating line.

The same algorithm can also be used for the reallocation of:

- slabs onto cold coil orders

- billets onto bar rod orders or wire rod orders

These two examples may be especially interesting for the owners of integrated steel plants – a mini mill, or a continuous caster combined with a hot strip mill. The design of a standard hot strip mill or bar/wire rod mill production line includes a furnace where the raw material is reheated before rolling. The architecture of an integrated steel plant makes it possible to eliminate the reheating of the slab/billet during production, because the slab/billet is still hot after casting. A fast reallocation operation helps to find a matching order, maintaining stable production and keeping energy costs low.

Task 6.5: Evaluation of results with respect to technical and economic benefits

Referring to the description in Task 1.1, the benefits of the solution are calculated based on IT and business indicators.


IT indicators:

Response time to find a solution
- Use case "Alternative thickness": The application has been developed to give an answer within the available response time. If only the historical data are accessed, the response time obtained is around 30 ms. The following results have been achieved in the test environment when accessing Order DB data; in an industrial context they can improve. When accessing the Order DB (if available), a typical data request per job has a duration of 300 ms and generates a data volume of up to 2 MB. Only the accuracy of the answer depends on the available time.
- Use case "Reallocation": The total time highly depends on the amount of preselected orders. In general, the execution time to analyse one order on an i7 (3770) CPU at 3.4 GHz is about 18 ms. The amount of preselected orders varies for a product from 500 to 5000, resulting in 9 s up to 90 s for one reallocation task. The current implementation is realised on a single-thread basis; using parallel processing, the speed can be improved several times over, depending on the available processor cores.

Availability of the application / Total availability of the application in percentage / Mean time between failures
- Both use cases: Generally, the agent platform as well as the SOA services ran stably without any breakdowns. Due to the continuous development, the longest period of uninterrupted execution of the system was about 3 weeks. From a practical point of view this period is already sufficient to say that the system is stable enough for industrial usage for this type of task.

Resource efficiency: CPU usage
- Use case "Alternative thickness": No intensive calculations are foreseen for this use case, and therefore no particular attention must be paid to the PC where the application runs.
- Use case "Reallocation": Due to an intensive calculation process, which can fully use the capacity of one processor core for more than one minute, it is highly recommended to run the system on a dedicated computer, or at least on a computer without time-critical tasks.

Memory usage
- Both use cases: No special memory requirements exist for either use case. A modern PC configuration (2 GB RAM) is sufficient to execute the application.

Network load
- Both use cases: The network load can be considered minimal. A typical data request per (Order DB access) job has a duration of 300 ms and generates a data volume of up to 2 MB.

Impact on external systems (databases, MES, ERP)
- Use case "Alternative thickness": The L2 automation should start the use case, and the Order DB should be queried at run time.
- Use case "Reallocation": In order to minimise the load and possible disturbance of external systems, we used a local database to which the necessary plant data have been submitted on a daily basis. For this reason the system has no impact on external systems.

Table 3: IT indicators for the evaluation of the platform
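The parallelisation mentioned for the reallocation response time can be sketched as follows; analyseOrder is a placeholder for the real per-order matching computation (about 18 ms per order on the reference CPU), and the scoring logic is invented for illustration.

```java
import java.util.stream.IntStream;

// Illustrative sketch: the per-order analysis is independent for each of
// the 500-5000 preselected orders, so it can be spread over the available
// processor cores with a parallel stream.
class ParallelReallocation {
    // Stand-in for the real fuzzy-matching work done per preselected order.
    static double analyseOrder(int orderId) {
        return orderId * 0.5;  // placeholder score
    }

    static double bestScore(int preselectedOrders) {
        return IntStream.range(0, preselectedOrders)
                .parallel()                 // use all available cores
                .mapToDouble(ParallelReallocation::analyseOrder)
                .max()
                .orElse(Double.NaN);
    }
}
```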

Business indicators:

Use case "Alternative thickness": Since the alternative thickness use case has only been implemented as a laboratory prototype, the functionality of the system has been investigated in a laboratory environment, using real plant data but without a test phase at a plant. The business indicators can therefore only be estimated based on general assumptions.

A study of the production in the Dunkirk plant indicates that around 250 coils, representing 6000 tons, are affected by this unpredicted event. Among these coils, 70% are re-allocated to prime orders; the others are downgraded or scrapped, which represents 1800 tons per year. With downgrading costs of around 100 €/t for non-prime and 300 €/t for scrapped material, the stakes of the alternative thickness for the Dunkirk plant amount to around 240 k€ per year. A laboratory study using the I2MSteel tool on the 2012/2013 production of the Dunkirk plant shows that 90% of this tonnage could be re-allocated to prime orders using the alternative thickness approach. This would lead to savings of 216 k€ per year for the Dunkirk plant. Scaling the solution up to AM production in Europe, a potential of 1 M€ can be reached.
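The arithmetic behind this estimate can be reproduced as follows. Note one added assumption: the split of the 1800 t between non-prime and scrap is not stated in the report, so a 1500 t / 300 t split is assumed here because it reproduces the quoted 240 k€ stake.

```java
// Illustrative check of the Dunkirk figures: 1800 t/year are downgraded or
// scrapped; at 100 EUR/t (non-prime) and 300 EUR/t (scrap) this gives the
// quoted 240 kEUR stake, of which 90% could be saved by re-allocation.
// The 1500 t / 300 t split is an assumption, not a figure from the report.
class DunkirkEstimate {
    static double annualStake(double nonPrimeTons, double scrapTons) {
        return nonPrimeTons * 100.0 + scrapTons * 300.0;
    }

    static double annualSavings(double stake, double reallocatedShare) {
        return stake * reallocatedShare;
    }
}
```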

Use case "Reallocation": During the period from 1 October to the end of November, products from Sagunto were sent to the I2MSteel platform in order to find re-allocation proposals. Every day a set of one to two hundred products and a set of more than one thousand orders were uploaded to the platform. Over two months, twelve hundred finished products, representing 20 000 tonnes, were analysed.


Category               Number   Tonnage (t)   Re-allocation (t)   Re-allocation %
Hot Rolled                 37        818             634                78
Full Hard                 179       3527            3312                94
Hot Dip Galvanised        332       5377            4289                80
Cold Rolled               525       8843            5762                65
Electro Galvanised        187       3844             859                22

Table 4 Reallocation proposed by I2MSteel within the Sagunto plant in Q4 2015

A comparison with the current allocation performed by the Sagunto team indicates an increase in re-allocation propositions over the current practice:

Category               Number   Tonnage (t)   Improvement %
Hot Rolled                  9        215             51
Full Hard                  43        850             35
Hot Dip Galvanised         68       1280             43
Cold Rolled                94        730             15
Electro Galvanised          0          0              0

Table 5 Additional re-allocation provided by I2MSteel compared to the current re-allocation

A potential improvement of 25% in re-allocation could be achieved using the I2MSteel platform, which represents a potential benefit of 500 k€ over two months, i.e. 3 M€ per year. Accordingly, earnings of 10 M€/year can be expected when considering AM's Europe-wide production. Additionally, the workload of the experts could be reduced by 30%, since the I2MSteel interface facilitates the selection of the best propositions.


2.3 Conclusions, indicating the achievements made.

The I2MSteel project has made it possible to put in place an efficient paradigm composed of three pillars and dedicated to steel manufacturing issues. Despite Siemens hiving off its business division dedicated to the metals sector, the project was only slightly impacted by this event, as the team in charge of integration was kept inside the new joint venture Primetals, held by Mitsubishi, and the work on the development of the semantic services and the choice of the agent platform was already done. Pillar 1, the service-oriented architecture, provides a general framework and all necessary services to communicate with the legacy systems of a plant at different levels. Pillar 2, the semantic model, delivers a description of the data sources, providing information about their relation to machinery, sensors and processes. Pillar 3, the agent technology, provides a universal concept for generating flexible software solutions for various types of IT problems.

The combination of these three technologies enables a paradigm shift in the development of IT solutions for industrial applications. It provides an approach which solves the three main challenges of IT development, particularly in brownfield scenarios.

Challenge 1 “Integration”: due to the heterogeneous IT landscapes found at brownfield plants, the integration of new IT solutions is very cumbersome, especially when the new system has to communicate with existing systems. The solution provided within the I2MSteel project is based, on the one hand, on the SOA architecture, which allows connecting different systems through a standardised communication layer, and on the other hand on the external-agent concept, which makes it possible to integrate legacy systems running on all kinds of platforms and using specific interfaces.

Challenge 2 “Extendibility”: in most cases the available IT solutions are closed systems, i.e. they provide a specific solution for a specific problem. If a new solution for the same problem becomes available, it is typically much easier to develop a new system than to adapt the existing one. The approach provided by this project allows integrating new solutions within the I2MSteel platform with minimal effort: one just needs to implement a new agent which serves the new solution. The already available agents and services can be reused, so that only the extensions have to be developed. Furthermore, applying the virtual-market strategies it is even possible to use both solutions, the old and the new one, at the same time; the market algorithm then decides which solution is best for the given situation.

Challenge 3 “Transferability”: as already mentioned, many IT systems found in brownfield plants are developed only for the given usage scenario. Transferring such solutions to other plants typically requires many adaptations and is very difficult or even impossible. Within the I2MSteel approach we use the semantic model to describe the available data sources and their relation to the machinery, processes and sensors. Applying this semantic model, agents no longer need to know the required data sources at development time: they ask the semantic services for the data they need, while the semantic service knows the configuration of the plant and delivers the right data connections. The transferability task thus becomes very easy; one just needs to adapt the semantic description of the plant.

From a practical perspective, the solution developed within the I2MSteel project shows good performance, proving that an agent platform can be used for industrial applications. The maintenance of the solution is quite easy due to its modular design. The costs of development are also in general competitive compared to standard ones; for the development of extensions, a decrease in costs can even be expected due to the improved extendibility.

The re-allocation use case was a good opportunity to apply this new concept to an industrial application. Its perimeter could have been wider, with more interacting services operating at all levels, but due to the limited project budget only a more compact solution could be developed. Nevertheless, we have implemented an operational solution for re-allocation that can easily be deployed in other plants, verifying the improved transferability. All of these results give us good confidence in this new approach, but it will take time for IT departments, and especially plants, to integrate such a concept, given its disruptive nature.


2.4 Exploitation and impact of the research results

Actual applications:

The platform is tested for re-allocation at the Sagunto plant for finished products and for pickled coils. Each day the MES sends the I2MSteel platform the current stock and the orders with open tonnage for re-allocation. An extension to the BD South West (Asturias and Fos-sur-Mer) is under study. Other use cases cited in this document could take advantage of the I2MSteel platform.
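As a rough illustration of this daily exchange, the stock and open-tonnage data could be serialised as a structured message like the one below. The report does not specify the actual exchange format; every field name here is an assumption.

```python
# Hypothetical shape of the daily MES message to the I2MSteel platform.
# All field names and values are invented for illustration only.
import json

daily_message = {
    "date": "2015-11-02",
    "stock": [
        {"coil_id": "C123", "route": "pickled", "weight_t": 18.5},
        {"coil_id": "C124", "route": "finished", "weight_t": 21.0},
    ],
    "orders": [
        {"order_id": "O456", "open_tonnage_t": 120.0},
        {"order_id": "O457", "open_tonnage_t": 35.5},
    ],
}

# The platform would receive this as a serialised message ...
payload = json.dumps(daily_message)

# ... and evaluate the open tonnage available for re-allocation.
received = json.loads(payload)
open_tonnage = sum(o["open_tonnage_t"] for o in received["orders"])
print(open_tonnage)  # -> 155.5
```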

Technical and economic potential for the use of the results:

The I2MSteel platform provides a general IT framework which can address a large number of issues concerning the integration of new services into a legacy system. The value created depends on the benefit brought by each new service. For the re-allocation use case, a potential of 10 M€/year has been evaluated for the European perimeter, and other applications whose implementation is facilitated by I2MSteel could generate additional benefit.

Application for patents:

Dr. Alexander Ebel, Dr. Sonja Zillner, Martin Schneider: Method and system for providing data analytics results, Patent application number: 15 183 115.6

Publications:

S. ZILLNER, A. EBEL, M. SCHNEIDER, L. ABELE, G. MATHIS, N. GOLDENBERG, A Semantic Modeling Approach for the Steel Production Domain, 1st European Steel Technology & Application Days & 31st Journées Sidérurgiques Internationales, 7-8 April 2014, Paris

M. NEUER, A. EBEL, A. WOLFF, F. MARCHIORI, N. MATSKANIS, M. RÖSSIGER, G. MATHIS, Raising economic efficiency of steel products by a smart re-allocation respecting different process routes, 2nd ESTAD 2015, 15-19 June 2015, Düsseldorf, Germany

N. Matskanis, S. Mouton, A. Ebel, F. Marchiori, Using Semantic Technologies for More Intelligent Steel Manufacturing, KEOD 2015, 7th International Conference on Knowledge Engineering and Ontology Development, 12-14 November 2015, Lisbon, Portugal

M. NEUER, A. EBEL, A. WOLFF, F. MARCHIORI, N. MATSKANIS, M. RÖSSIGER, G. MATHIS, Dynamisches Umplanen von Stahlprodukten (Smart reallocation of steel products), StahlEisen Journal, November 2015

G. Mathis, A. Ebel, M. J. Neuer, N. Matskanis, M. Rössiger, A. Papiez, S. Zillner, F. Marchiori, L. Piedimonte, A. Wolff, C. Pietrosanti, N. Goldenberg, R. Pevestorf, S. Mouton, Development of a new automation and information paradigm for integrated intelligent manufacturing in steel industry based on holonic agents, Workshop on EU Funded Steel Projects, 2 February 2016, Brussels


M.J. Neuer, Agenten am virtuellen Marktplatz zur dynamischen Umplanung von Stahlprodukten (Agent-based virtual market for dynamic reallocation of steel products), Workshop: Industrie 4.0 in der Stahlindustrie, Stahl Akademie, 3 February 2016

S. Zillner, A. Ebel, M. Schneider, Towards intelligent manufacturing, semantic modelling for the steel industry, submitted to IFAC MMM2016, 17th IFAC Symposium on Control, Optimization and Automation in Mining, Mineral and Metal Processing, September 2016, Vienna, Austria

M.J. Neuer, F. Marchiori, A. Ebel, N. Matskanis, L. Piedimonte, A. Wolff, G. Mathis, Dynamic reallocation and rescheduling of steel products using agents with strategical anticipation and virtual market structures, submitted to IFAC MMM2016, 17th IFAC Symposium on Control, Optimization and Automation in Mining, Mineral and Metal Processing, September 2016, Vienna, Austria


3 List of figures

Figure 1. Florange Plant - IT Architecture .............................................................................. 20

Figure 2. Automation pyramid .............................................................................................. 21

Figure 3. Principle of the Proximity Search Engine (PSE) ......................................................... 23

Figure 4. Dimensions of product properties ............................................................................ 24

Figure 5. Organisation of business resources ......................................................................... 25

Figure 6. I2MSteel versioning concept for software development. ............................................. 33

Figure 7. V-Model of software testing .................................................................................... 33

Figure 8. Global architecture of I2MSteel system .................................................................... 36

Figure 9. Communication mechanisms of a simple example of a mail notification service. ............ 37

Figure 10 Three different expert roles ................................................................................... 38

Figure 11 Modeling levels of the concept model in correspondence to the MOF modeling levels as

defined in (OMG, 2004) ....................................................................................................... 39

Figure 12 Concept model with all levels and models ................................................................ 40

Figure 13 Overview of the six foundational models of the steel manufacturing domain ................ 41

Figure 14 Simplified Overview of the Structure Model ............................................................. 42

Figure 15 Simplified overview of the Measurement Model ........................................................ 43

Figure 16 Overview of the structural component of a hot rolling mill ......................................... 45

Figure 17 Overview about the structure of the hot rolling mill .................................................. 46

Figure 18 Example for types and instances ............................................................................ 47

Figure 19 Example for Product, Measurements and Storage .................................................... 48

Figure 20 Exemplary sensors in the hot rolling mill ................................................................. 49

Figure 21 Relations in the finishing mill instance .................................................................... 50

Figure 22 Structure Query for a Four-High Stand .................................................................... 50

Figure 23 Start-page of the I2MSteel Semantic Mediawiki....................................................... 51

Figure 24 Example for generation of Hardware Component Instance Finishing Mill ..................... 52

Figure 25 Mapping of concepts of the Concept Model to SMW concepts ..................................... 53

Figure 26. Example of page design ....................................................................................... 54

Figure 27. Integration of ontologies ...................................................................................... 55

Figure 28. Screenshot of semantic agent implementation ........................................................ 55

Figure 29: An example for a Dunkirk day production schedule. ................................................ 58


Figure 30. Registration information of an example agent. ........................................................ 62

Figure 31. Sequence diagram for the registration of agents with the broker platform. ................. 63

Figure 32. Example of a service request to the broker, here commited by the Level-2 system. ..... 63

Figure 33. Sequence diagram of the agent interplay for use case Alternative Thickness. .............. 64

Figure 34. Histogram of thicknesses rolled with a width of 1343 mm and a specific steel type (7675) over two months. The steel type designation follows the common practice of the partner AM. Thicknesses below 2.7 mm were no longer achievable. ...................................................... 65

Figure 35. MES based reallocation with a future order in a reconstruction of historic data from the

demonstration plant in Dunkirk. The x-axis represents the time and the y-axis represents the

thickness h. The width of each bar represents the total duration needed for the rolling process. ... 65

Figure 36. Mindmap of the agents/holons required for use case “Alternative Thickness”. ............. 66

Figure 37. Transfer of a misrolled coil into the reallocation process. .......................................... 67

Figure 38. Mindmap of agents/holons for the use case “Reallocation”. ....................................... 68

Figure 39. Registration information of an example agent. ........................................................ 70

Figure 40. Sequence diagram for the registration of agents and services with the platform and

service utilization. .............................................................................................................. 70

Figure 41. Sequence diagram of the agent interplay for the use case Reallocation as-is. ............. 71

Figure 42. Sequence diagram of the agent interplay for evaluation of proximity for reworking

products. ........................................................................................................................... 73

Figure 43. User interface of FailedProductAgent ..................................................................... 80

Figure 44. User interface of OrderAgent ................................................................................ 81

Figure 45 SOA and Agent Platform layers architecture diagram ................................................ 87

Figure 46 Deployment diagram of the SOA I2MSteel Services and the Agent platform ................. 87

Figure 47 Aerospike database performance test results ........................................................... 89

Figure 48 MongoDB performance test results. ........................................................................ 89

Figure 49 Security components interaction diagram ................................................................ 91

Figure 50 Retrieving local term based on the I2MSteel ontology one ......................................... 92

Figure 51 Sagunto process routes ...................................................................................... 100

Figure 52. I2MSteel user interface (Product selection) ........................................................... 101
Figure 53. I2MSteel user interface (Results/Orders evaluation) .............................................. 102


4 List of tables

Table 1. Feature comparison ................................................................................................ 31

Table 2. Possible product reallocations in the Sagunto plant (HR – hot rolled, CR – cold rolled, HD – hot dip galvanized, EG – electro galvanized) ........................................................... 101


5 List of acronyms and abbreviations

Acronym Definition

AHDL Agent and Holon Definition Language

AM ArcelorMittal

API Application Programming Interface

BLOB Binary Large Object

CLI Call Level Interface

CSS Cascading Style Sheets

CSV Comma-Separated Values

CVS Concurrent Versions System

DAI Distributed Artificial Intelligence

DB Database

DVD Digital Versatile Disc

ERP Enterprise-Resource-Planning

ESIDEL European Steel Industry Data Exchange Language

FIPA Foundation for Intelligent Physical Agents

GUI Graphical User Interface

HSM Hot Strip Mill

HTML Hypertext Markup Language

HTTP HyperText Transfer Protocol

HTTPS HyperText Transfer Protocol Secure

ID Identifier

IDE Integrated Development Environment

ISA International Society of Automation

ISO International Organization for Standardization

JADE Java Agent Development Framework

JIAC Java-based Intelligent Agent Componentware


JSON JavaScript Object Notation

MAS Multi Agent System

MES Manufacturing Execution System

MOF Meta Object Facility

MS Microsoft

MTS Manager of technical service

OMG Object Management Group

OWL Web Ontology Language

PDF Portable Document Format

PDP Policy Decision Point

PEP Policy Enforcement Point

PLSQL Procedural Language/Structured Query Language

PO Product Owner

PSE Proximity Search Engine

PSL Process Specification Language

RDF Resource Description Framework

REST Representational State Transfer

RTC Customer Technical Relation

RUP Rational Unified Process

SAL Service Abstraction Layer

SM Scrum Master

SMW Semantic MediaWiki

SOA Service Oriented Architecture

SOAP Simple Object Access Protocol

SPARQL SPARQL Protocol And RDF Query Language

TFS Team Foundation Server


UC Use Case

URL Uniform Resource Locator

WP Work Package

WS WebService

WSDL Web Services Description Language

XACML eXtensible Access Control Markup Language

XSD XML Schema Definition


7 List of References

1 http://en.wikipedia.org/wiki/Basic_access_authentication

2 Service Oriented Architecture, http://www.opengroup.org/soa/source-book/soa/soa.htm#soa_definition

3 Representational state transfer,

http://en.wikipedia.org/wiki/Representational_state_transfer

4 Simple Object Access Protocol, http://www.w3.org/TR/soap/

5 Hypertext Transfer Protocol (Secure), http://en.wikipedia.org/wiki/HTTP_Secure

6 Syed Naqvi, Philippe Massonet, Benjamin Aziz, Alvaro Arenas, Fabio Martinelli, Paolo Mori,

Lorenzo Blasi, Giovanni Cortese, Fine-Grained Continuous Usage Control of Service Based

Grids – The GridTrust Approach, ServiceWave 2008

7 Efthymios Chondrogiannis, Nikolaos Matskanis, Joseph Roumier, Philippe Massonet, Vassiliki Andronikou, Enabling semantic interlinking of medical data sources and EHRs for clinical research purposes, eChallenges 2011

8 Object Management Group: MOF (Meta Object Facility) Core Specification 1.4. No.04, 2004.

9 M. Kroetzsch, D. Vrandecic, and M. Voelkel: Semantic MediaWiki. In International Semantic Web Conference (ISWC06). Springer, 2006, pp. 935-942.

10 ArcelorMittal: Warmbanderzeugung.pdf. http://www.arcelormittal-ehst.com/ameh/uploads/file/Warmbanderzeugung.pdf. [Accessed 12 June 2013]

11 "Popularity ranking of database management systems". db-engines.com. Solid IT. Retrieved 4

February 2014.

12 Document-oriented databases, see http://en.wikipedia.org/wiki/Document-oriented_database

13 NoSQL databases, see http://en.wikipedia.org/wiki/NoSQL

14 Object Management Group: MOF (Meta Object Facility) Core Specification 1.4, No.04, 2004


10 Appendixes

[1] Use Case details

[2] Requirements

[3] Agent platform selection

[4] Details of the software architecture

[5] Agent and Holon Definition Language (AHDL)

[6] Report on related Industrial Standards and Approaches

[7] Scanned copy of the signed Technical Annex


Appendix 1

Use Case details


Template for use case definition

Use Case <number> <the name is the goal as a short active verb phrase>

Actors <list of actors (primary, secondary) involved in use case>

Trigger <the action upon the system that starts the use case, may be a time event>

Short Description (Goal in context in Cockburn UC) <a longer statement of the goal, if needed>

Priority (in related information) <how critical to your system / organization>

System Boundaries (Scope & Level in Cockburn UC) <what system is being considered black-box under design>

Pre-Conditions <conditions on the system when the use case begins>

Post-Conditions <conditions on the system when the use case ends>

Steps (Description in Cockburn UC) <interactions between actors and system that are necessary to achieve the goal>: 1 Actor …, 2 System …

Extensions (NEW) <extensions, one at a time, each referring to a step of the main scenario>: 1a <condition causing branching>: <action>, 3a …

Variants (Variations in Cockburn UC) <any variation of the main steps that will cause eventual bifurcation in the scenario>: 1 …

Exceptional Cases (Exceptions in Cockburn UC) <exceptions that will terminate the main steps unsuccessfully (with error message)>: 1a …, 2a …

Non-Functional requirements (NEW) <to be defined using the “Definition_Non-functional_Requirements” document>

Frequency (NEW) <how often it is expected to happen>

Issues (Open issues in Cockburn UC) <list of issues to remain to be resolved>

Comments


Use case “Alternative Thickness”

Use Case: UC01 HSM L2 alternative thickness

Actors: HSM L2 (STA Setpoint Agent), MES (Order Management), Holon System

Trigger: T1 Request of an alternative thickness from HSM L2; T2 Coil produced (coil removed from the coiler)

Short Description: The current prestrip temperature in front of the FM is sent by HSM L1 to HSM L2. HSM L2 verifies the range of the temperature. If the temperature is too low, HSM L2 verifies whether rolling can still be performed. If not, the minimum thickness is calculated and sent to the Holon System together with the prestrip thickness (upper limit). The Holon System starts the calculation of an alternative thickness based on the production history and the current order situation. The new thickness is sent back to the HSM L2.

Priority: High

System Boundaries: Holon System

Pre-Conditions: HSM L2 & MES systems are available; production history data is available

Post-Conditions: Roller table is free and available; HSM L2 & MES systems are still available

Steps:

Trigger T1
1 HSM L2 sends a request for an alternative thickness to the Holon System with the parameters: alloy code, width, minimum thickness, prestrip thickness, material id.
2 The Holon System requests an alternative thickness from the historical DB (HDB) with the parameters: alloy code, width, minimum thickness, prestrip thickness. The HDB returns an alternative thickness. If no thickness is found in the HDB, the HDB notifies the alternative-thickness Holon; the Holon System then notifies HSM L2 and control is passed to the operator.
3 The Holon System sends the alternative thickness to the HSM L2.
4 In parallel to step 2, the Holon System requests from the MES an available current order with a suitable thickness. Parameters: alloy code, width, minimum thickness, prestrip thickness.
5 The MES sends the alternative thickness for the order to the Holon System. If no thickness is found in the MES, the MES notifies the alternative-thickness Holon; the Holon System then notifies HSM L2 and control is passed to the operator.
6 The Holon System sends the alternative thickness for the order to HSM L2.
7 HSM L2 decides which thickness to use depending on the response times of the Holon System and a potential operator input.

Trigger T2
8 HSM L2 sends the production data of each produced coil to the Holon System (data stored in the HDB).

Extensions:
2a The Holon System checks the consistency of all received thicknesses. If a thickness is not OK, it is suppressed from the list.
4a If no thickness is found in the HDB, the MES notifies the Holon System.
4b When the Holon System is notified, it notifies HSM L2 and control is passed to the operator.

Variants: –

Exceptional Cases:
2a No connection to the production history data is possible; an error message is sent to the Holon System.
2b The production history DB is corrupted; an error message is sent to the Holon System.
2c No suitable thickness is found in the production history; the HDB sends an error message to the Holon System.
4a No connection to the order book data is possible; an error message is sent to the Holon System.
4b The order book DB data is corrupted; an error message is sent to the Holon System.
4c No suitable thickness is found in the current order book; the MES sends an error message to the Holon System.
5a The received thickness is not plausible; the Holon System sends a message to HSM L2 that the thickness is not in range.

Non-Functional requirements:

Performance: response time max. 30 seconds

Reliability: see ISO norm

Security: very important: only authorized persons can have access; secure communications and secure data (backup)

Internationalization:
- metric system: conversion to L2 regional metrics (e.g. m to mm, % to ppm, °C to °F, weight)
- language interface (mill operator, maintenance): English, Spanish, German, French

Maintainability:
- maintain and analyse the historical DB
- maintain the user guideline for installation and running
- coding conventions for developers (e.g. comments and code in English)
- maintain documentation: use case, sequence chart

Traceability:
- use a configurable log mechanism
- systematic use of the log mechanism in the implementation of the application
- produced logs can be exploited by an analysis tool
- logs are not deleted by default
- use a traceability mechanism between UC, detailed design, implemented functions and test cases; all links maintained in a traceability matrix

Robustness:
- test that the application still runs correctly after a certain amount of time
- the application can handle malformed data without crashing

Frequency: 2 or 3 times/week

Issues: <list of issues to remain to be resolved>

Comments: –
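The retrieval logic of steps 2-5 can be sketched in a few lines: search the production history and the order book for a thickness between the calculated minimum and the prestrip thickness. The data layout and the helper function below are invented for illustration and do not reflect the project's implementation.

```python
# Sketch of the UC01 lookup: given the request parameters, search a mock
# production history and a mock order book for an alternative thickness
# in the feasible range. All data and field names are invented.

def find_alternative(request, history, orders):
    lo, hi = request["min_thickness"], request["prestrip_thickness"]
    candidates = []
    # Step 2: thicknesses already rolled for this alloy code and width.
    for h in history:
        if (h["alloy"] == request["alloy"] and h["width"] == request["width"]
                and lo <= h["thickness"] <= hi):
            candidates.append(h["thickness"])
    # Steps 4-5: open orders with a suitable thickness.
    for o in orders:
        if o["alloy"] == request["alloy"] and lo <= o["thickness"] <= hi:
            candidates.append(o["thickness"])
    # Return the thinnest feasible candidate; None means control passes
    # to the operator, as in the exceptional cases above.
    return min(candidates) if candidates else None

req = {"alloy": "7675", "width": 1343, "min_thickness": 2.7, "prestrip_thickness": 3.4}
hist = [{"alloy": "7675", "width": 1343, "thickness": 3.0}]
book = [{"alloy": "7675", "thickness": 2.9}]
print(find_alternative(req, hist, book))  # -> 2.9
```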


Use case “Order recovery”

Use Case: UC02 Order recovery

Actors: Quality manager (person), HSM L2, coil yard management, L2 reheating furnace, (MES system)

Trigger: Quality manager requests a list of alternative materials for a specific order.

Short Description: A coil for a specific order was scrapped and has to be produced again with highest priority. The quality manager requests a list of alternative materials. The system searches the RHF, the HSM and the coil yard for alternative materials. A list of these materials is returned to the quality manager.

Priority: high

System Boundaries: Holon System

Pre-Conditions: RHF L2, HSM L2 & coil yard management systems are available. Coil yard management also covers coils in transport between the HSM and the coil yard.

Post-Conditions: HSM L2, RHF L2 & coil yard systems are still available (the Holon System does not affect the stability and operability of these systems).

Steps:
1 The quality manager sends a request for alternative materials to the Holon System with the parameters: list of possible steel grades, width, width range, thickness, thickness range, strip length, length range.
2 The Holon System queries the coil yard with the parameters: steel grades, width range, thickness range.
3 The coil yard system returns a list of coils with the data: coil id, steel grade, width, thickness, length, final customer, yield strength.
4 The Holon System requests all coils in production from the HSM L2.
5 The HSM L2 returns a list of coils with the data: coil id, steel grade, width, thickness, length, final customer, yield strength.
6 The Holon System requests all slabs in production from the RHF L2.
7 The RHF L2 returns a list of slabs with the data: material id, steel grade, slab width, slab thickness, slab length, final customer.
8 The Holon System selects those slabs in the RHF with a suitable steel grade. For each of these slabs the HSM L2 is asked whether the target width, thickness and length could be rolled.
9 The list of possible materials from the coil yard, HSM L2 & RHF L2 is returned to the quality manager.

Extensions: If the RHF L2 is only able to return an alloy code instead of the steel grade, the Holon System will request the mapping of the alloy code to the steel grade from the HSM L2 or the MES system.

Variants: –

Exceptional Cases:
- No connection to the coil yard system is possible; an error message is sent to the Holon System.
- No connection to the RHF L2 system is possible; an error message is sent to the Holon System.
- No connection to the HSM L2 system is possible; an error message is sent to the Holon System.
- No connection to the quality manager is possible; an error message is sent to the Holon System.

Requirements:

Performance: response time max. 30 seconds

Reliability: see ISO norm

Security: very important: only authorized persons can have access; secure communications and secure data (backup). Access rights for the quality manager via user management. Secure communications between agents/holons. Secure access to L2 systems via a) locally installed agents running under admin rights, or b) remote access via the user management of the L2 system with secure transmission of login data.

Availability: not important.

Internationalization:
- metric system (SI base units): conversion to L2 regional metrics (e.g. m to mm, % to ppm, °C to °F, weight)
- language interface (mill operator, maintenance): English, Spanish, German, French

Maintainability:
- maintain the user guideline for installation and running
- coding conventions for developers (e.g. comments and code in English)
- maintain documentation: use case, sequence chart, collaboration diagram (if necessary)

Traceability:
- use a configurable log mechanism
- systematic use of the log mechanism in the implementation of the application
- produced logs can be exploited by an analysis tool
- logs are not deleted by default (critical requirement)
- use a traceability mechanism between UC, detailed design, implemented functions and test cases; all links maintained in a traceability matrix

Robustness:
- the system still runs correctly after a certain amount of time (to be defined)
- the application can handle malformed data without crashing

Interoperability: communication with coil yard and Level-2 systems in different IT landscapes

Deployment: flexible within different IT landscapes

Frequency: 1-2 times/week

Affected by Issues: –

Analysis: –
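The matching performed in steps 2-8 essentially filters candidate materials by steel grade and by width/thickness tolerance ranges. The sketch below illustrates that filtering step only; the field names, grades and data are invented, not taken from the plant systems.

```python
# Sketch of the UC02 matching step: filter candidate coils by steel grade
# and by width/thickness tolerance ranges. All data are invented.

def match_materials(request, coils):
    matches = []
    for c in coils:
        if (c["grade"] in request["grades"]
                and abs(c["width"] - request["width"]) <= request["width_range"]
                and abs(c["thickness"] - request["thickness"]) <= request["thickness_range"]):
            matches.append(c["coil_id"])
    return matches

req = {"grades": {"DX51", "DX52"}, "width": 1250, "width_range": 20,
       "thickness": 2.0, "thickness_range": 0.1}
coils = [
    {"coil_id": "A1", "grade": "DX51", "width": 1245, "thickness": 2.05},
    {"coil_id": "A2", "grade": "S235", "width": 1250, "thickness": 2.0},
    {"coil_id": "A3", "grade": "DX52", "width": 1300, "thickness": 2.0},
]
print(match_materials(req, coils))  # -> ['A1']
```

In the use case, the same filter would be applied to the lists returned by the coil yard, the HSM L2 and the RHF L2 before presenting the combined result to the quality manager.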


Use Case “Slab insertion into rolling program”

Use Case: UC03 Slab insertion into rolling program

Actors: Program scheduler (person), L2 reheating furnace, L2 HSM, slab yard manager

Trigger: Program scheduler requests to insert a new production order into an existing rolling program.

Short Description: A customer requests that an order be produced urgently. The program scheduler tries to insert this order as early as possible into an existing rolling program.

Priority: high

System Boundaries: Holon System

Pre-Conditions: A slab for this order must be available in the slab yard. RHF L2, HSM L2 & slab yard management systems are available.

Post-Conditions: HSM L2, RHF L2 & slab yard systems are still available (the Holon System does not affect the stability and operability of these systems).

Steps:
1 The program scheduler requests the Holon System to insert a new production order into an existing rolling program, with the parameters: production order (contains steel grade, slab width, slab thickness, slab length, yield strength, strip thickness, strip width, strip length, coiler temperature).
2 The Holon System queries the slab yard with the parameters: steel grade, slab width, slab thickness, slab length. The slab yard system returns a slab id.
3 The Holon System requests the RHF L2 for a slot in the current rolling program. Parameters: steel grade, slab width, slab thickness, slab length, RHF slab exit temperature.
4 The RHF L2 returns a slot within the current rolling program.
5 The Holon System requests the HSM L2 to check the extension of the rolling program (roll wear, transitions from the neighbouring slots to the inserted one).
6 The HSM L2 returns that the insertion is possible.
7 The Holon System returns the slot id to the program scheduler.

Extensions: The Holon function is also able to process a list of production orders instead of a single order.

Variants: –

Exceptional Cases: In each of the following cases the Holon System exits the control flow and returns the received error message to the program scheduler.
2a No connection to the slab yard system is possible; an error message is sent to the Holon System.
2b No slab is available in the slab yard; a message is sent to the Holon System.
3a No connection to the RHF L2 system is possible; an error message is sent to the Holon System.
3b No slot is available; an error message is sent to the Holon System.
5a No connection to the HSM L2 system is possible; an error message is sent to the Holon System.
5b The insertion is not possible; an error message is sent to the Holon System.

Requirements:

Performance: response time max. 30 seconds

Reliability: see ISO norm

Security: very important: only authorized persons can have access; secure communications and secure data (backup). Access rights for the program scheduler via user management. Secure communications between agents/holons. Secure access to L2 systems via a) locally installed agents running under admin rights, or b) remote access via the user management of the L2 system with secure transmission of login data.

Availability: not important.

Internationalization:
- metric system (SI base units): conversion to L2 regional metrics (e.g. m to mm, % to ppm, °C to °F, weight)
- language interface (mill operator, maintenance): English, Spanish, German, French

Maintainability:
- maintain the user guideline for installation and running
- coding conventions for developers (e.g. comments and code in English)
- maintain documentation: use case, sequence chart, collaboration diagram (if necessary)

Traceability:
- use a configurable log mechanism
- systematic use of the log mechanism in the implementation of the application
- produced logs can be exploited by an analysis tool
- logs are not deleted by default (critical requirement)
- use a traceability mechanism between UC, detailed design, implemented functions and test cases; all links maintained in a traceability matrix

Robustness:
- the system still runs correctly after a certain amount of time (to be defined)
- the application can handle malformed data without crashing

Interoperability: communication with slab yard and Level-2 systems in different IT landscapes

Deployment: flexible within different IT landscapes

Frequency: 1-2 times/week

Affected by Issues: <list of issues to remain to be resolved>

Analysis: –


Use Case “Feedback loop from pickling line to cooling section HSM”

Use Case UC04 Feedback loop from pickling line to cooling section HSM Actors Pickling line L2, HSM CS L2, MES system

Trigger T1 Pickling line L2 strip passed the PL

T2 Strip approaches at HSM L2

Short Description

After the pickling line a measurement device based on magnetic remanence indirectly measures the yield strength along the strip. The aim is to control this signal by exploiting a correlation with the coiling temperature. When a strip passes the pickling line the magnetic remanence and the corre-sponding coiling temperature are stored in an internal database. This information is used by a self-adapting model to determine a coiling temperature setpoint curve for subsequent strips.

Priority high System Boundaries Holon system

Pre-Conditions PL L2, HSM CS L2 ,MES material tracking & magnetic remanence measurement systems are available

Post-Conditions Systems above are still available (Holon-System does not affects the stability and operability of the systems).

Steps

Trigger T1

1.1 When a strip passes the pickling line, PL L2 triggers the Holon system. Parameters: strip ID, steel grade, cut lengths of head and tail, measured values along the strip length, elongation values along the strip.

1.2 The Holon system retrieves the coiling temperature of the strip from the HSM L2 database. Parameter: strip ID.

1.3 The HSM L2 database returns the coiling temperature values along the strip length and the target coiling temperature.

1.4 The Holon system requests the tracking information of the strip (cutting, recoiling) from the MES material tracking. Parameter: strip ID.

1.5 MES returns cutting and recoiling information.

1.6 The Holon system uses the gathered length information to correlate the coiling temperature with the magnetic remanence measurement. The correlated data are stored in an internal database together with the steel grade and the strip thickness.

Trigger T2

2.1 When a strip approaches the HSM, the HSM CS L2 requests a coiling temperature setpoint curve. Parameters: strip ID, steel grade, target thickness, target coiling temperature.

2.2 A model in the Holon system uses the steel grade, target coiling temperature and target thickness to find similar strips in the internal database.

2.3 Based on the correlation data of the similar strips, the model calculates a coiling temperature setpoint curve.

2.4 The coiling temperature setpoint curve is returned to the HSM CS L2.
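Steps 2.2 and 2.3 can be sketched as a similarity lookup followed by a point-wise average of the stored curves. This is a minimal illustration, not the project's model: the in-memory database, the record fields, the tolerance values and the averaging rule are all assumptions.

```python
from statistics import mean

# Hypothetical stand-in for the Holon system's internal database: each
# record holds the context attributes and the stored coiling-temperature
# curve along the strip length (values invented for illustration).
STRIP_DB = [
    {"grade": "S355", "thickness": 3.0, "target_ct": 600,
     "ct_curve": [598, 602, 605, 601]},
    {"grade": "S355", "thickness": 3.1, "target_ct": 605,
     "ct_curve": [604, 606, 608, 603]},
    {"grade": "DX51", "thickness": 2.0, "target_ct": 560,
     "ct_curve": [558, 561, 562, 559]},
]

def setpoint_curve(grade, target_thickness, target_ct,
                   thickness_tol=0.2, ct_tol=10):
    """Steps 2.2-2.3: select similar strips by steel grade, target
    thickness and target coiling temperature, then average their
    stored coiling-temperature curves point by point."""
    similar = [s for s in STRIP_DB
               if s["grade"] == grade
               and abs(s["thickness"] - target_thickness) <= thickness_tol
               and abs(s["target_ct"] - target_ct) <= ct_tol]
    if not similar:
        # corresponds to exceptional case 2.2 (no sufficient data)
        raise LookupError("no sufficient data for the requested context")
    n = min(len(s["ct_curve"]) for s in similar)
    return [mean(s["ct_curve"][i] for s in similar) for i in range(n)]
```

A real implementation would query the correlated remanence data as well; here only the curve-averaging idea of step 2.3 is shown.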

Extensions: none specified

Variants: none specified

Exceptional Cases: in each of the following cases the Holon system exits the control flow and informs the operator via the HMI:


1.1a) No connection to the PL L2 is possible; an error message is sent to the Holon system.

1.1b) The Holon system checks the validity and integrity of the received data.

1.3a) No connection to the HSM L2 database is possible; an error message is sent to the Holon system.

1.3b) The requested material is not found in the L2 database; an error message is sent to the Holon system.

1.4a) No connection to the MES material tracking is possible; an error message is sent to the Holon system.

1.4b) The requested material is not found in the MES material tracking; an error message is sent to the Holon system.

1.6) The length information of the values is not consistent.

2.2) Not enough data is available for the requested context (steel grade, target thickness, target coiling temperature); an error message is sent to the HSM L2 system.

Requirements

Performance: response time: max. 30 seconds

Reliability: see ISO norm

Security: very important: only authorized persons can have access; secure communications and secure data (backup). Access rights for the program scheduler via user management. Secure communications between Agents/Holons. Secure access to L2 systems via a) locally installed Agents running under admin rights, or b) remote access via the user management of the L2 system with secure transmission of login data.

Availability: not important.

Internationalization:

- metric system (SI base units): conversion to L2 regional units (e.g. m to mm, % to ppm, °C to °F, weight)

- language interface (mill operator, maintenance): English, Spanish, German, French
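The unit-conversion requirement above can be sketched as a small table of conversion functions. The unit keys and factors below are illustrative assumptions, not the project's actual mapping.

```python
# Illustrative SI-to-regional-unit conversion table; extend per L2 site.
CONVERSIONS = {
    ("m", "mm"): lambda v: v * 1000.0,
    ("%", "ppm"): lambda v: v * 1e4,          # 1 % = 10 000 ppm
    ("degC", "degF"): lambda v: v * 9.0 / 5.0 + 32.0,
    ("kg", "t"): lambda v: v / 1000.0,
}

def convert(value, src, dst):
    """Convert a value from an SI base unit to an L2 regional unit."""
    if src == dst:
        return value
    try:
        return CONVERSIONS[(src, dst)](value)
    except KeyError:
        raise ValueError(f"no conversion from {src} to {dst}")
```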

Maintainability:

- maintain the user guide for installation and operation

- establish coding conventions for developers (for example, comments and code in English)

- maintain documentation: use case, sequence chart, collaboration diagram (if necessary)

Traceability:

- use a configurable log mechanism

- use the log mechanism systematically throughout the implementation of the application

- produced logs can be exploited by an analysis tool

- logs are not deleted by default (critical requirement)

- use a traceability mechanism between use cases, detailed design, implemented functions and test cases; all links maintained in a traceability matrix

Robustness:

- the system runs correctly after an extended period of operation (duration to be defined)

- the application can handle malformed data without crashing

Interoperability:

Communication with the Slab Yard and Level 2 systems in different IT landscapes

Deployment:

Flexible within different IT landscapes

Frequency: 1) every strip in the PL; 2) every strip in the HSM


Affected by Issues: <list of issues remaining to be resolved>

Analysis


Use case “Energy market orientated production planning”

Use Case: Energy market orientated production planning
Actors: MES Scheduler, HSM L2 Model, Energy provider

Trigger: The MES program scheduler requests a cost-optimized time slot from the I2M system for a planned rolling program.

Short Description

The MES system has scheduled a rolling program.

The MES system requests a cost-optimized time slot for the rolling program from the I2M system.

The I2M system requests from the HSM L2 (RM + FM) model the required electric energy for each slab in the rolling program, together with the calculated rolling time.

Based on these data, a cost-optimized time slot is requested from the energy providers.

The I2M Holon provides the negotiation mechanism needed for a combined price and time optimization.

Priority: High
System Boundaries: Holon System

Pre-Conditions: An energy provider with suitable access is available; HSM L2 and MES systems are available

Steps

1 The MES program scheduler requests a cost-optimized time slot from the I2M system for a planned rolling program. Parameters, for each production order: steel grade, slab width, slab thickness, slab length, yield strength, strip thickness, strip width, strip length, coiler temperature.

2 The I2M system requests the calculated energy and production time for each order of the rolling program from the L2 system. Parameters: production order (steel grade, slab width, slab thickness, slab length, yield strength, strip thickness, strip width, strip length, coiler temperature).

3 The I2M system requests a cost-optimized time slot for the rolling program from all connected energy providers.
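The selection of a cost-optimized time slot (steps 2 and 3) can be sketched as follows. The provider price curves, the flat energy profile and all names are invented for illustration; a real negotiation with providers would be far richer than this single-pass comparison.

```python
# Sketch: given the calculated energy per order and the rolling duration,
# compare hourly price curves from connected providers and pick the
# cheapest feasible start slot.
def total_energy_mwh(orders):
    """Step 2: sum the calculated energy over all production orders."""
    return sum(o["energy_mwh"] for o in orders)

def cheapest_slot(orders, duration_h, provider_prices):
    """Step 3: provider_prices maps provider name -> list of EUR/MWh
    prices per hour. Returns (provider, start_hour, total_cost),
    assuming the program draws energy evenly over its duration."""
    per_hour = total_energy_mwh(orders) / duration_h
    best = None
    for provider, prices in provider_prices.items():
        for start in range(len(prices) - duration_h + 1):
            cost = per_hour * sum(prices[start:start + duration_h])
            if best is None or cost < best[2]:
                best = (provider, start, cost)
    return best
```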

Extensions: none specified

Variants: none specified

Post-Conditions: HSM L2, MES and provider systems are still available (the Holon system does not affect their stability and operability).

Non-Functional

Results

Comments


Use case “Energy Management”

Use Case Energy Management

Actors

L1 Plant Energy Monitoring data

L2 Process Execution, model forecasting and historical DB & Energy DB

MES (Order Book, Production Planning & Scheduling)

Holon System

Trigger: Set-up of a medium-term production plan

Short Description

Taking into account the Order Book list with the relevant constraints (lead time, costs, plant limitations, etc.), a first-attempt Production Plan is sent from the MES Planning Module to the local Scheduling Module. The energy forecast is activated. Scheduling is activated.

The scheduling plan is verified in order to satisfy the existing constraints and to obtain a "good solution" from the energy-demand point of view.

The Production Plan is modified in order to realize the production target in an energy-efficient way.

In case of a new high-priority order, the new planning and scheduling take existing orders into account; maximum productivity is also a constraint.

Priority: Medium
System Boundaries: Holon System; Energy Historical DB

Pre-Conditions: HSM L2 and MES systems are available; production history data is available

Post-Conditions: The Holon System is continuously active; it is part of the planning procedure
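The verification step described above (checking that a scheduling plan is a "good solution" from the energy-demand point of view) can be sketched as a simple feasibility test. The peak-limit criterion, the data shapes and all names are assumptions made for illustration only.

```python
# Sketch: a candidate schedule is acceptable here if its forecast energy
# demand stays below a contracted peak limit in every period.
def energy_ok(schedule, forecast_mw, peak_limit_mw):
    """schedule: list of (period, load_mw) entries from the Scheduling
    Module; forecast_mw: dict of base-load forecast per period."""
    demand = dict(forecast_mw)
    for period, load in schedule:
        demand[period] = demand.get(period, 0.0) + load
    return all(mw <= peak_limit_mw for mw in demand.values())
```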


Use case “Rescheduling”

Use Case Rescheduling

Actors Shift operator, Optimization system, Schedule

Trigger Schedule is broken

Short Description
- The target schedule cannot be fulfilled by the current setup, due to a malfunction or disturbance
- Data for replanning is accessed, using the remaining material flow, the order situation and the plant state
- A new schedule is generated and optimized, as close as possible to the original schedule

Priority High

System Boundaries: Optimization system

Pre-Conditions:
- A day plan exists, is available and optimized
- A shift plan exists, is available and optimized
- The plant state is known
- Resources (data of the material flow and orders) are accessible

Post-Conditions: The plant is in a normal state; convergence into the normal schedule

Steps
1 Acquire the day schedule, the plant state and the shift schedule
2 Identify the rest material present in both schedules
3 Generate the best solution for the rest material based on the prior schedules
4 Propagate the result back to the shift operator

Extensions
1 Acquire the reason for the failure
2 Incorporate knowledge of the failure reason into the new schedule (alternative thickness)

Variants
The revised schedule might incorporate an emergency solution based on a historical database
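The core of steps 1-3 can be sketched as follows, under the assumption that schedules are ordered lists of coil IDs: the rest material is whatever both plans still contain, and the new schedule keeps the original shift-plan order, which supports the robustness requirement that results converge back into the day plan. The `blocked` parameter is a hypothetical hook for extension 2 (excluding material affected by the failure reason).

```python
# Sketch of the rescheduling core; data shapes are assumptions.
def reschedule(day_plan, shift_plan, processed, blocked=()):
    """Return a new shift schedule for the unprocessed (rest) material,
    ordered as in the original shift plan."""
    day = set(day_plan)
    return [c for c in shift_plan
            if c in day and c not in processed and c not in blocked]
```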

Non-Functional requirements

Performance: must be fast enough to reschedule the rest material efficiently

Reliability: see ISO norm

Security: shift-planning privacy constraints, data constraints; access to plant state data must be regulated

Internationalization:

- metric system (SI base units): conversion to L2 regional units (e.g. m to mm, % to ppm, °C to °F, weight)

- language interface (mill operator, maintenance): English, Spanish, German, French

Maintainability:

- maintain a log archive

- maintain an archive of best solutions (open question)

Traceability:


- log revised schedules and their numerical difference to the original shift plan

- estimate the cost impact by projecting a continuation of the original shift plan

Robustness:

- delivered schedules must converge back into the original day plan

Frequency: 3 to 4 times a week

Issues

Comments


Use case “Re-Allocation – BD NORTH”

Use Case Re-Allocation – BD NORTH

Actors Customer Service (person), Non allocated stock (Slabs and Coils), (MES System), Automatic tools for matching (PSE), SAP commercial system

Trigger Automatic Batch Trigger

Short Description: At regular intervals, a list of non-allocated metal units has to be re-allocated to a list of orders.

The purpose is to use knowledge of the product status to optimize the valorisation of non-allocated material with a function that takes into account:

- delay of orders
- cost of movements (de-piling, trucks)
- scrap to be produced
- over-quality
- inventory reduction
- ensuring HSM throughput
- possibility of repair

Priority High

System Boundaries Holon system

Pre-Conditions:

Lists of orders to be produced, coming from SAP or internal orders

List of non-allocated products available, with their status

Scheduling system

Proximity Search Engine tool (PSE)

GPQS (Global Plant Quality System)

Quality constraints, process constraints, restricted list of products and orders

Post-Conditions: MES is available and the allocation is updated

Steps
1 The MES requests an update of the allocation of products to orders / orders to products
2 The Holon system requests the lists of orders from the MES and from SAP
3 The Holon system requests the lists of available products and their locations
4 The Holon system requests from the HSM scheduler the products to be rolled
5 The Holon system requests all slabs in production from the RHF L2
6 The Holon system produces a new allocation
7 The Holon system asks the PSE to evaluate the cost of the new allocation (global cost and specific costs)
8 The Holon system asks the GPQS to evaluate the cost of the new allocation (global cost and specific costs)
9 The Holon system checks whether all costs are acceptable; if not, a new local search can be performed to reduce, for instance, lead time or the delay on specific orders
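Steps 6 and 9 can be sketched as a greedy allocation against a cost function built from the criteria listed in the short description (order delay, movement cost, over-quality). All field names, weights and the acceptance threshold are invented for illustration; the real system evaluates costs via the PSE and GPQS.

```python
# Sketch: allocate each non-allocated unit to the compatible order with
# the lowest cost, then check the global cost against a threshold.
def unit_cost(unit, order):
    cost = order["delay_days"] * 10.0             # delay of orders
    cost += unit["depiling_moves"] * 5.0          # cost of movements
    cost += max(0, unit["grade_rank"] - order["grade_rank"]) * 20.0  # over-quality
    return cost

def allocate(units, orders, max_global_cost):
    allocation, total = {}, 0.0
    free = dict(orders)                           # order_id -> order data
    for unit in units:
        candidates = [(unit_cost(unit, o), oid) for oid, o in free.items()
                      if unit["grade_rank"] >= o["grade_rank"]]  # quality constraint
        if not candidates:
            continue                              # unit stays non-allocated
        cost, oid = min(candidates)
        allocation[unit["id"]] = oid
        total += cost
        del free[oid]
    accepted = total <= max_global_cost           # step 9 acceptance check
    return allocation, total, accepted
```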

Extensions
1 Acquire the reason for the failure
2 Incorporate knowledge of the failure reason into the new schedule (alternative thickness)

Exceptional Cases: in each of the following cases an error message is sent to the Holon system:

- No connection to SAP is possible

- No connection to the GPQS system is possible

- No connection to the MES is possible

Non-Functional requirements

Performance: response time: max. 4 hours

Reliability : see ISO norm

Security: very important: only authorized persons can have access; secure communications and secure data (backup). Access rights for the quality manager via user management. Secure communications between Agents/Holons. Secure access to L3 systems via a) locally installed Agents running under admin rights, or b) remote access via the user management of the L3 system with secure transmission of login data.

Availability: not important.

Internationalization:

- metric system (SI base units): conversion to L2 regional units (e.g. m to mm, % to ppm, °C to °F, weight)

- language interface (mill operator, maintenance): English, Spanish, German, French

Maintainability:

- maintain the user guide for installation and operation

- establish coding conventions for developers (for example, comments and code in English)

- maintain documentation: use case, sequence chart, collaboration diagram (if necessary)

Traceability:

- use a configurable log mechanism

- use the log mechanism systematically throughout the implementation of the application

- produced logs can be exploited by an analysis tool

- logs are not deleted by default

- use a traceability mechanism between use cases, detailed design, implemented functions and test cases; all links maintained in a traceability matrix

Robustness:

- test that the application runs correctly after an extended period of operation

- the application can handle malformed data without crashing

Interoperability:

Communication with the Coil Yard and Level 2 systems in different IT landscapes

Deployment:

Flexible within different IT landscapes

Frequency 1-2 times/week

Issues

Comments


Appendix 2

Requirements


Main Requirements

01 Broker for agents (all): Agents need to find each other.
- an agent registers with the broker
- an agent can ask for the existence of other agents

02 Software agents (all)

03 Agent marketplace (UC 1): An infrastructure regulating agent negotiations is needed.

04 Communication with external systems (all)

05 Communication between installations (UC 6): Communication between agents representing different organisations is needed.

06 Communication between agents and the market place (all)

07 Persistence (all): Types of persisted data (apart from logging data): business logic data; record & replay; data for supporting agent recovery. Control flow starts only from the agents, not from the persistence mechanism.

08 Scalability: The platform must be able to maintain operation under increased load or usage.

09 Availability: The platform must continue to operate even if a component fails.

10 Licensing (all): The project must evaluate and take into account the consequences of software licenses.

11 Portability: The need for portability was not clearly expressed.

12 Security: The platform implements access control: identification, certificate-based authentication and authorization.

13 Configuration (all): There will be configurations for agents, the market place and the broker.

14 Logging (all): Actions of each process must be logged, for three different purposes: logging for debugging; logging for record & replay; logging for auditing (legal concerns). All log entries must have a timestamp.

15 User interface (all)

16 FIPA compliance (all): The resulting platform must be able to be made FIPA compliant.

17 Ontologies (all): Reasoning is (if required) only done on behalf of the semantic agent and is not part of the platform.

18 Record & replay (all)

19 Recovery (all)


Detailed Requirements

01 - Broker for agents

01.01 There is one logical broker for each installation.
- Each agent needs to register with exactly one broker.
- The number of registered agents is monitored.
- Registration of new agents is denied depending on a configurable threshold; in this case the system admin is notified (see FR 02.19).
- In case of multiple connected installations there must be a synchronization between the real brokers.

01.02 Each agent registers with exactly one broker at agent startup. Upon registration the agent provides the following information: an ID, which is unique in the context of a single installation; the agent type; the services. TBD: see also the AHDL specification.

01.03 An agent is de-registered from the broker at agent shutdown. TBD: see also the AHDL specification.

01.04 The broker makes agent information, selected by agent type, accessible to other agents. The broker returns the agent ID(s).

01.05 The broker makes agent information, selected by services, accessible to other agents. The broker returns the agent ID(s).

01.06 The broker gives agents the necessary information to set up communication with other agents and their services. The broker publishes service qualities if there are "significant" changes; this includes the communication performance (measured or estimated) between the service-requesting agent and the broker with which the service-offering agent is registered.

01.07 Message parsing and validation by the broker: parsing and validation of administration and communication message structures by quality gates. In case of malformed messages: notification of the message sender; notification of the system admin (see FR 02.19); denial of processing of certain messages (see FR 01.08).

01.08 Deny processing of certain messages: deny processing of administration and communication messages from certain agents / other brokers and/or of a certain type.

01.09 Service discovery: there is a UDDI-like service discovery, which provides the WSDL and URI of Web services that are provided by agents.

01.10 Hosting of a UDDI-like Web service for service discovery by the broker.
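Requirements 01.01-01.05 can be sketched as a small in-memory registry. The class and field names, the ID scheme and the threshold handling are assumptions for illustration; the actual platform design is defined by the AHDL specification referenced above.

```python
import itertools

# Sketch of a single logical broker per installation (01.01) with
# lookup by agent type (01.04) and by service (01.05).
class Broker:
    def __init__(self, max_agents=100):
        self.max_agents = max_agents      # configurable threshold (01.01)
        self.agents = {}                  # agent_id -> registration info
        self._ids = itertools.count(1)

    def register(self, agent_type, services):
        """01.02: register an agent, providing type and services."""
        if len(self.agents) >= self.max_agents:
            # 01.01: deny and notify the system admin (FR 02.19)
            raise RuntimeError("registration denied; notify system admin")
        agent_id = f"A{next(self._ids)}"  # unique per installation
        self.agents[agent_id] = {"type": agent_type, "services": list(services)}
        return agent_id

    def deregister(self, agent_id):       # 01.03
        self.agents.pop(agent_id, None)

    def find_by_type(self, agent_type):   # 01.04
        return [a for a, info in self.agents.items() if info["type"] == agent_type]

    def find_by_service(self, service):   # 01.05
        return [a for a, info in self.agents.items() if service in info["services"]]
```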


02 Software agents

02.01 An agent does not change the information provided to the broker throughout its lifetime, except for service quality information.

02.02 Inter-agent communication: an agent must be able to send messages to and receive messages from other agents according to a common protocol.

02.03 An agent has its own control flow.

02.04 An agent can instantiate a new negotiation on the marketplace by providing the type of negotiation and the configuration of the negotiation protocol, including the description of the negotiation item, the proposal template and the agreement template.

02.05 An agent must be able to ask the marketplace for the available negotiation types and the corresponding configuration.

02.06 Agents can subscribe to certain negotiation types bound to certain attributes.

02.07 Agents can demand admission to a negotiation in compliance with negotiation-specific admission policies. In order to become a negotiation participant, an agent must perform admission.

02.08 Agents can demand to withdraw their own, previously granted admission for a negotiation. In order to leave a negotiation before it is closed, an agent must withdraw its admission. Success of withdrawing an admission may depend on the negotiation type, configuration and current state of the negotiation.

02.09 Agents can submit proposals to the market place. Each negotiation has a unique ID. Proposal submission is only accepted if the corresponding negotiation is open.

02.10 Agents can demand to withdraw their own, previously submitted proposals from the market place. Success of withdrawing a proposal may depend on the negotiation type, configuration and current state of the negotiation.

02.11 Agents can query for their own agreements. Completeness of the returned agreements may depend on the current state of the negotiation. However, it must be guaranteed that the complete set of agreements for the querying agent is available after the negotiation is closed.

02.12 Lifecycle of agents: it must be possible to generate agents and to notify them to shut down via a "shutdown pending" event at runtime of the platform. During their lifetime, agents periodically communicate alive messages to the watchdog (see FR 19.02).

02.13 Startup of agents: there can be more than one instance of an agent type at startup time.

02.14 Cloning of agents: an agent can clone itself.

02.15 Agent clock: agents must have access to a clock which is synchronized with the clocks of other agents by memorizing time offsets within the agent, thus not affecting the system clock.

02.16 Agent clock synchronization instance: a virtually centralized component that is responsible for synchronizing the agent clocks.
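The offset-based clock of FR 02.15 can be sketched as follows: the agent stores an offset to the reference time instead of touching the system clock. How the reference time is obtained from the synchronization instance (FR 02.16) is out of scope here; the class and method names are assumptions.

```python
import time

# Sketch of FR 02.15: a synchronized agent-local clock that leaves the
# system clock untouched.
class AgentClock:
    def __init__(self):
        self.offset = 0.0  # seconds relative to the reference clock

    def synchronize(self, reference_time):
        """Memorize the offset against the synchronization instance."""
        self.offset = reference_time - time.time()

    def now(self):
        """Agent-local, synchronized time; the system clock is unchanged."""
        return time.time() + self.offset
```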

02.17 Publish agent services: service-offering agents publish the list of available services (see FR 02.21) to their broker periodically or on changes (e.g. quality comprises the performance of the agent business logic on the current hardware, and the communication performance between the service-offering agent and the broker with which the agent is registered).

02.18 Subscribe to services: service-consuming agents can query agents according to their services and thus also subscribe, via the broker, to service change notifications of the corresponding service-offering agents (see FR 01.09).

02.19 Agent(s) for notifying the system admin: there are specialized agents that can be used to notify the system administrator about system events such as a system crash (e.g. by sending emails).


02.20 Agent(s) for load balancing: there is a specialized load-balancing agent which subscribes to the service quality change notifications of all other agents and manages load balancing, e.g. by starting new node instances and cloning agents.

02.21 Web service hosting by agents: hosting of business-domain Web services by agents.

02.22 Test agents: standard test agents, which are deployed on each installation by default.

02.23 Thread management for inter-agent communication: received messages are handled by a dedicated thread management that allocates messages to threads.

02.24 Message compression for inter-agent communication: messages can be compressed and transferred in compressed form.

02.25 Streaming for inter-agent communication: there can be n-to-m streaming connections between agents.

02.26 State machine for agents: agents can have multiple states, which must be recognizable by the AgentLifeCycleManager(s).

02.27 Services used by platform components: definition of the links between platform components (e.g. watchdog, broker) and specialized agents (e.g. email notification agent, persistence, time synchronization).

02.28 Develop specialized agents: notification and email agent.


03 Agent marketplace

03.01 The marketplace must be able to load and manage different negotiation types. Such a type is a specific implementation of a negotiation (e.g. a 1-to-1 bargaining negotiation or an n-to-m continuous double auction). It must provide the following functionality:
- defining and retrieving a unique negotiation ID
- defining and retrieving the configuration of the negotiation protocol in YAML, including the description of the negotiation item, the proposal template and the agreement template (see also 02.04)
- managing the current state of the negotiation, i.e. open for admission, open for submission, completed
- processing requests for initialization of the negotiation, agent admission and withdrawal of admissions, proposal submissions and withdrawal of proposal submissions, matching of proposals according to negotiation-specific policies and algorithms, and provision of agreements to agents
- managing and retrieving historic proposal submissions and agreement data
Furthermore, negotiations can be configured in a way that is specific to the implementation of a certain negotiation type. Such configurations shall be defined in YAML and contain use-case-specific policies in addition to (03.01). An example would be "allow at most 10 agents and open proposal submission once there are at least two agents admitted". The market place is multi-threaded.

03.02 The marketplace must enforce compliance with a defined sequence of agent interactions in negotiations. The following policies are the minimum constraints for agent interaction sequences; each negotiation implementation may define additional policies on top:
- instantiation of new negotiations (02.04), query for available negotiations (02.05) and agent subscription (02.06) can be executed independently of any other interactions
- agent admission (02.07) must be done before any of the following actions can be performed
- withdrawing an admission (02.08) requires being admitted
- submitting proposals (02.09) requires being admitted
- withdrawing a proposal (02.10) requires being admitted and having submitted this proposal before
- querying agreements (02.11) requires being admitted

03.03 The marketplace must be able to host multiple negotiations, each allowing multiple agents to participate at the same time by submitting one or multiple proposals. Each negotiation, participant and proposal has an ID, which is unique in the context of a single installation.

03.04 The marketplace must be able to clear proposals by forming agreements. Market clearing is done by running a clearing mechanism that is specific to the type of negotiation. Clearing may be done continuously upon submission of proposals, at certain points in time or upon certain negotiation-specific events during the negotiation, or immediately after all proposals are submitted.

03.05 The marketplace must be able to open negotiations. Negotiations are opened immediately after meeting all required negotiation-type-specific preconditions, such as a required elapsed time or a minimum number of admitted agents.

03.06 The marketplace must be able to close negotiations. Negotiations are closed immediately after meeting all required negotiation-type-specific preconditions, such as reaching a fixed termination time or forming a minimum number of required agreements.

03.07 Notification on negotiation instantiation: the marketplace notifies agents, which are subscribed to negotiations of a certain type and attributes, of the IDs of matching negotiations upon instantiation.

03.08 Notification on negotiation opening: the marketplace notifies agents of negotiation opening if they are admitted to the corresponding negotiations.

03.09 Notification on negotiation closing: the marketplace notifies agents of negotiation closing if they are admitted to the corresponding negotiations.

03.10 Notification on admission of other agents: depending on the negotiation protocol and security restrictions, the marketplace notifies an agent admitted to a negotiation of other agents' admission to the same negotiation.

03.11 Notification on admission withdrawal by other agents: depending on the negotiation protocol and security restrictions, the marketplace notifies an agent admitted to a negotiation of other agents' admission withdrawals from the same negotiation.

03.12 Notification on proposal submission of other agents: depending on the negotiation protocol and security restrictions, the marketplace notifies an agent admitted to a negotiation of other agents' proposal submissions to the same negotiation.

03.13 Notification on proposal withdrawal by other agents: depending on the negotiation protocol and security restrictions, the marketplace notifies an agent admitted to a negotiation of other agents' proposal withdrawals from the same negotiation.

03.14 Notification on own agreements: the marketplace informs agents about their own agreements immediately when they are formed.

03.15 Notification on agreements of other agents: depending on the negotiation protocol and security restrictions, the marketplace notifies an agent admitted to a negotiation of other agents' agreements formed in the same negotiation.

03.16 Shutdown of the market place: when the market place receives a "shutdown pending" event, it stops all negotiations, persists its state and shuts itself down.
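The negotiation lifecycle of 03.01-03.06 and the interaction constraints of 03.02 can be condensed into a small state machine. This is a toy sketch: the state names follow 03.01, the clearing rule (pairing the best bid with the best ask at the midpoint price) merely stands in for a negotiation-specific mechanism, and all identifiers are invented.

```python
# Sketch: one negotiation with the state sequence
# open_for_admission -> open_for_submission -> completed.
class Negotiation:
    def __init__(self, negotiation_id, min_agents=2):
        self.id = negotiation_id
        self.state = "open_for_admission"
        self.min_agents = min_agents
        self.participants = set()
        self.proposals = {}          # agent_id -> (side, price)
        self.agreements = []

    def admit(self, agent_id):       # 02.07 / 03.05
        assert self.state == "open_for_admission", "admission closed"
        self.participants.add(agent_id)
        if len(self.participants) >= self.min_agents:
            self.state = "open_for_submission"   # 03.05 precondition met

    def submit(self, agent_id, side, price):     # 02.09; 03.02: admitted first
        assert self.state == "open_for_submission", "negotiation not open"
        assert agent_id in self.participants, "agent not admitted"
        self.proposals[agent_id] = (side, price)

    def close(self):                 # 03.06, then clearing per 03.04
        assert self.state == "open_for_submission"
        self.state = "completed"
        bids = sorted(((p, a) for a, (s, p) in self.proposals.items()
                       if s == "bid"), reverse=True)
        asks = sorted((p, a) for a, (s, p) in self.proposals.items()
                      if s == "ask")
        for (bid, buyer), (ask, seller) in zip(bids, asks):
            if bid >= ask:           # toy clearing rule, midpoint price
                self.agreements.append((buyer, seller, (bid + ask) / 2))
        return self.agreements
```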

04 Communication with external systems

04.01 Integration must be flexible in terms of operating systems, interfaces and communication protocols. Known examples: operating systems: Linux, Unix, VAX, Windows; interfaces: SQL, MQSeries, SAP, SOAP, REST/JSON; protocols: TCP/IP, CORBA telegrams, OPC DA and UA; systems: MES using message queues, Oracle L3 database, sensors, Siemens TDC.

04.02 Integration details must be known at compile time.

04.03 Communication with external systems must comply with their specific policies. TBD: must be specified when the pilot plant is available.

04.04 External information must be translated into internal data that can be processed by agents. Note: concrete (external) agents are responsible for data format transformations, and semantic / ontology agents are responsible for semantic data transformations.

04.05 Local pre-processing of data coming from external systems: pre-process data as close to the external system as possible in order to minimize network traffic.

05 Communication between installations

05.01 Brokers can identify each other and communicate with each other. In particular, they synchronize agent registry information.

05.02 Agent communication through brokers. TBD: see also the AHDL specification.

06 Communication between agents and the market place

06.01 Support proposal templates. Proposal templates are required to restrict the possibilities of agents in defining proposals.

06.02 Support high expressiveness of proposals for different use cases: support bids and asks with an expressiveness that allows specifying single-value proposals, combinatorial proposals and utility functions.

06.03 Support agreement templates. Agreement templates are required to enable agents to understand the structure of agreements.

06.04 Support high expressiveness of agreements for different use cases: support an expressiveness that allows specifying agreements which are formed based on single-value proposals, combinatorial proposals and utility functions.

06.05 Message parsing and validation by the market place: parsing and validation of negotiation message structures by quality gates. In case of malformed messages: notification of the message sender; notification of the system admin (see FR 02.19); denial of processing of certain messages (see FR 06.06).

06.06 Deny processing of certain messages: deny processing of negotiation messages from certain agents and of a certain type.


07 Persistence

07.01 There are specialized agents that abstract from persistence by mediating persistence requests of data from other agents to a local storage mechanism. Among other specialized agents there must be an agent that stores online help content related to a unique online help content ID. This is not subject to D'ACCORD, but to specialized agents.

07.02 The maximum amount of persisted data per agent is 1 GB.

07.03 The data to be stored has to be structured.

07.04 The maximum data update frequency is 2 rows per second. Assumed number of columns: 20 to 50. This is not subject to D'ACCORD, but to specialized agents.

07.05 A specialized persistence agent is able to semantically transform data (e.g. unit transformation) using a semantic data transformation agent on the fly on read and write data access. This is not subject to D'ACCORD, but to specialized agents.

07.06 The persistence agent must be capable of executing data analytics functionality provided by business logic agents on the data. There must be some standardized semantics for sending data analytics functionality. Bring the algorithm to the data in order to reduce the amount of transferred information.


10 Licensing

10.01 Technical licensing issues: there will be licensing for parts of the platform. There is no need for separate licensing of applications built with the platform.

10.02 Other legal issues: licenses of additional components / functionalities must be compliant with the licensing scheme of the whole platform. This covers licenses of external components that communicate with the platform via network protocols, and licenses of components that are eagerly coupled with the platform, i.e. internal agents via DLL import as well as additional platform modules that extend the platform functionality.

13 Configuration ID Requirement Description

13.01 Logically centralised configuration management

A logical central mechanism handles configuration data and sends it to each agent.

13.02 Decentralized configuration data storage

The configuration of each agent is stored in a local configuration file.

13.03 Configuration through files In general, configuration is expressed in YAML files, but for spe-cific needs by agents any other configuration format shall be usable by agents on their behalf

13.04 Separation of administrative configuration and application functionality configuration

Configuration comprises agent administrative information and configuration that is directly related to application functionality. Separation must either be done by separating into two different blocks within the same configuration file or using different con-figuration files.

13.05 Platform provides config parser The platform provides a YAML parser.

13.06 Ease of configuration There must be an editor available for YAML files.

13.07 Configuration of cloned agents The configuration of the cloned agents must be provided upon requesting the clone.

13.08 Configuration of the broker TBD: define details here:

The configuration may contain: - max. number of accepted agents per type - max. number of accepted types - communication information (including protocols) - security setup

13.09 Configuration of the market place TBD: check whether this requirement is really needed

13.10 Configuration of the startup mechanism of the installation Configuration is organized according to a list of name-value pairs: [agent executable / type] [URI of the configuration file]
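A sketch of parsing this start-up list; the one-pair-per-line format follows the requirement text, while the comment handling and the example entries are illustrative assumptions:

```python
# Parse a start-up list in the requirement 13.10 format:
# one "<agent executable / type> <configuration URI>" pair per line.

def parse_startup_list(text):
    """Return (executable, config_uri) pairs, skipping blanks and comments."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        executable, config_uri = line.split(None, 1)  # split on first whitespace run
        entries.append((executable, config_uri.strip()))
    return entries

startup = parse_startup_list(
    "# agents started with this installation\n"
    "persistence_agent file:///etc/i2msteel/persistence.yaml\n"
    "broker            file:///etc/i2msteel/broker.yaml\n"
)
```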

14 Logging ID Requirement Description

14.01 All log messages must be output in human-readable / plain text

14.02 Local logging The market place, the broker and each agent can log locally and their output is written to log files.

14.03 Central log repository Agent logs are available in a central point.

14.04 Configurable logging Actual log data can be enriched by adding attributes for categorizing the log message. Logging can be configured according to the verbosity levels debug, info, warn, error and fatal. This configuration can be set separately for each agent and broker.
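Using Python's standard logging module as a stand-in, the per-agent verbosity configuration of 14.04 might be sketched as follows; the agent names and level assignments are illustrative:

```python
import logging

# Map the platform's five verbosity levels onto standard logging levels.
LEVELS = {"debug": logging.DEBUG, "info": logging.INFO, "warn": logging.WARNING,
          "error": logging.ERROR, "fatal": logging.CRITICAL}

def configure_agent_logger(agent_name, level_name):
    """Each agent gets its own logger with an individually configured verbosity."""
    logger = logging.getLogger(agent_name)
    logger.setLevel(LEVELS[level_name])
    return logger

# Broker and agent configured separately, as the requirement demands.
broker_log = configure_agent_logger("broker", "warn")
agent_log = configure_agent_logger("scheduling-agent", "debug")
```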

14.05 There must be separate log output for each agent.

14.06 There must be separate log output for the market place.

14.07 There must be separate log output for the broker.

TBD: verify the necessity to log broker states

14.08 Logging of exceptions

14.09 Deadlock hints Runtime support for bug-tracking by deadlock hints


15 User interface ID Requirement Description

15.01 Support different kinds of user interfaces for different user groups: expert / end-user, administrator, developer.

There is a hierarchy of quantity of information starting from expert / end-user -> administrator -> developer. Role management is based on a configurable assignment of user IDs to one or multiple roles.

15.02 Admin / developer mode user interface

Developer mode must contain at least the following information: - overview of all registered agents of one broker + their status - software version info of all known components (agents, services?)

15.03 Visualization in the user interface is decoupled from the actual data to be visualized.

Agents provide the data, and the platform and / or an external system provides the visualization means.

15.04 Visualization data from different agents can be visualized in the user interface within the same view.

15.05 The platform is responsible for delivering the agent's user interface visualization data to the view and the user's input data from the view to the agent.

15.06 User interface for market place information

At least list the ongoing negotiations and their current states.

15.07 Agent that displays online help On request for online help with the corresponding ID, other agents can demand that an online help agent display online help within its GUI.

17 Ontologies ID Requirement Description

17.01 Support static lookup of ontologies (TBOX)

Enable the Semantics WP: the platform should not hinder ontology lookup in this case.

17.02 Support transfer of ontology instance data between agents (ABOX)

18 Record & Replay ID Requirement Description

18.01 Recording internal states of agents

18.02 Recording the outgoing traffic from agents to agents as well as from agents to external systems

18.03 Recording the incoming traffic from agents to agents as well as from external systems to agents

18.04 Recorded data can be replayed

18.05 Replay can be based on changed input parameters

18.06 Replay must be possible in fast forward mode

18.07 Recording can be started and stopped at any time from a central instance for one installation, which requires time synchronization between the components in the installation

18.08 Log output during replay must be the same as the log output during the recording step (except for additional recording-related log entries) unless input parameters are changed

18.09 Replay is possible on a different instance of the recorded installation


18.10 It shall be possible to record and / or replay only a part of the installation
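The record & replay requirements above (18.01-18.06) can be sketched as follows; the Recorder class, the message tuples and the speedup factor are illustrative assumptions, not the I2MSteel platform API:

```python
import time

# Minimal record & replay sketch: record traffic with timestamps, then
# re-deliver it; speedup > 1 gives the fast-forward mode of 18.06.

class Recorder:
    def __init__(self):
        self.tape = []  # (timestamp, sender, receiver, payload)

    def record(self, sender, receiver, payload):
        self.tape.append((time.time(), sender, receiver, payload))

def replay(tape, deliver, speedup=1.0):
    """Re-deliver recorded traffic in order, compressing inter-message gaps."""
    if not tape:
        return
    previous = tape[0][0]
    for timestamp, sender, receiver, payload in tape:
        time.sleep((timestamp - previous) / speedup)
        previous = timestamp
        deliver(sender, receiver, payload)

rec = Recorder()
rec.record("agent-a", "agent-b", {"order": 42})
rec.record("agent-b", "agent-a", {"ack": True})

delivered = []
replay(rec.tape, lambda s, r, p: delivered.append((s, r, p)), speedup=1000.0)
```

Replaying with changed input parameters (18.05) would amount to substituting payloads before calling `deliver`.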

19 Recovery ID Requirement Description

19.01 Agents can have multiple states, which must be persisted by the platform.

See also "robustness" criterion.

19.02 The broker runs a watchdog, which can detect that an agent is not behaving normally. The watchdog can restart an agent using its last persisted agent state.

The watchdog executes configurable rules and performs actions (e.g. agent shutdown --> FR 02.12, notifications --> FR 02.19).

19.03 The watchdog can be configured. The watchdog configuration contains the information about which agents must be synchronously set to the agent states corresponding to the same timestamp if one of the agents in this group needs to be recovered.

19.04 Marketplace can have multiple states, which must be persisted by the platform.

19.05 Start up shadow process. A node (including all agents and the market place) or an external agent can start a shadow process for itself.

19.06 Hand-over to shadow process. - The process collects the current configuration and state of all components and provides this data to the shadow process via direct hand-over. - The process hands over registered external agents and operator GUI control from the main to the shadow process, if applicable.

19.07 Handle communication buffer overflows.

Exception handling of communication overflows: notify system admin (FR 02.19)

19.08 Self-health check - Self-health check in an inside-out manner for each agent, broker and the market place - Each agent must also check the availability of its connections to external systems and / or open connections to other agents - A centralized aggregation component collects the distributed self-health checks (can be an agent or the watchdog in the broker, etc.)
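A minimal sketch of the watchdog behaviour of 19.02/19.03, assuming a heartbeat mechanism; all names and the timeout rule are hypothetical, and the configurable rules and notifications (FR 02.12 / FR 02.19) are omitted:

```python
# Hypothetical watchdog: restart agents whose heartbeat has gone stale,
# using their last persisted state.

class Watchdog:
    def __init__(self, persisted_states, restart):
        self.persisted_states = persisted_states  # agent name -> last saved state
        self.restart = restart                    # callback: restart(agent, state)

    def check(self, heartbeats, now, timeout=30.0):
        """Restart every agent whose last heartbeat is older than `timeout`."""
        restarted = []
        for agent, last_seen in heartbeats.items():
            if now - last_seen > timeout:
                self.restart(agent, self.persisted_states.get(agent))
                restarted.append(agent)
        return restarted

restarts = []
wd = Watchdog({"caster-agent": {"step": 7}},
              lambda agent, state: restarts.append((agent, state)))
stale = wd.check({"caster-agent": 100.0, "mill-agent": 125.0}, now=140.0)
```

Here only `caster-agent` (last seen 40 s ago) is restarted from its persisted state; the group-wise synchronous recovery of 19.03 would extend `restart` to reset all agents of a configured group together.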


Appendix 3

Agent platform selection


REQUIREMENTS 100,00% 34

1 100% 1 1

Directory service for agents 4 Agents need to find each other

2 100% 2 2

Agent negotiation 4 Agents need to negotiate

Agent marketplace 4 An infrastructure regulating agent negotiations is needed

3 100% 1 1

Agent decision 4 Agents can decide according to a function and to rules

4 100% 1 1

Agent provides interface 4 Each agent exposes an interface that allows communication

5 100% 6 6

Agent communication 4 Agents need to communicate with each other and require a common language

Inter levels communication 4 Agent communication with levels 2,3 & 4 is necessary

Inter plants communication 4 Communication between Agents representing different organisations is needed

Access to external systems 4 Agents need to access data from databases and applications of plants

Resources for communication 4 In several scenarios local programs are necessary to connect

Agent for communication 4 When local processing is required for communication, an agent is required, not a service

6 100% 1 1

Agent memory 4 Agents must store (« persist ») information to be used later

7 100% 1 1

Scalability 4 The platform must be able to maintain operation under increase of load or usage

8 100% 1 1

Availability 4 The platform must continue to operate even if a component fails

10 100% 1 1

Portability 4 Need for portability was not clearly expressed

11 (Security) 100% 2 2

Access Control 4 The platform must implement access control

Security in communication 4 Actions on security must be logged

12 (Configurable application) 100% 8 8

Customisation 4 The platform can be configured and adapted to customers' needs

Agent defined configuration 4 Each agent has its own configuration, stored locally

Centralised config. Management 4 A central mechanism handles configuration data and sends it to each agent

Configuration through files 4 Configuration is expressed in structured text files (XML format strong candidate)

Platform and features config. 4 Configuration includes technical configuration (e.g. environment variables) and functional configuration

Platform provides config parsing 4 Configuration parsing tools are available for programming

Ease of configuration 4 Configuration should be easy

Security in config. management 4 Credentials have to be securely stored and provided to Agents

14 100% 1 1

Lifecycle of Agents 4 It must be possible to generate and kill Agents

15 100% 1 1

Use of Agents for optimisation 4 Agents do not represent services but rather entities

16 (Logging) 100% 4 4

Local logging 4 Agents log locally, for example in log files

Central log repository 4 There is a need to gather all logs in a central point, for exploitation

Configurable logging 4 Logging can be configured according to criteria, typically categories

Replayable logging 4 The platform must log enough information to be able to replay situations

17 (Visualization) 100% 2 2

Visualization 4 Some information needs to be displayed

Agent visualization capabilities 4 The Broker Agent will provide some visualization and, if necessary, so can some agents

18 100% 1 1

FIPA Compliant 4 The resulting platform must be FIPA compliant


                                       JADE  DACCORD  JIAC
REQUIREMENTS (overall)                  70%      72%   63%

1                                      100%     100%  100%
   Directory service for agents           4        4     4
2                                       75%     100%   75%
   Agent negotiation                      4        4     4
   Agent marketplace                      2        4     2
3                                      100%      75%  100%
   Agent decision                         4        3     4
4                                      100%     100%  100%
   Agent provides interface               4        4     4
5                                       38%      50%   33%
   Agent communication                    4        4     4
   Inter levels communication             0        0     0
   Inter plants communication             0        0     0
   Access to external systems             2        3     1
   Resources for communication            0        2     0
   Agent for communication                3        3     3
6                                       50%      50%   50%
   Agent memory                           2        2     2
7                                      100%     100%  100%
   Scalability                            4        4     4
8                                       50%      50%   25%
   Availability                           2        2     1
10                                     100%      50%  100%
   Portability                            4        2     4
11 (Security)                           38%      38%   38%
   Access Control                         0        0     0
   Security in communication              3        3     3
12 (Configurable application)           63%      78%   47%
   Customisation                          3        3     3
   Agent defined configuration            3        4     3
   Centralised config. Management         4        3     1
   Configuration through files            4        4     4
   Platform and features config.          0        2     0
   Platform provides config parsing       0        3     0
   Ease of configuration                  4        4     2
   Security in config. management         2        2     2
14                                     100%      50%  100%
   Lifecycle of Agents                    4        2     4
15                                     100%     100%  100%
   Use of Agents for optimisation         4        4     4
16 (Logging)                            81%      69%   63%
   Local logging                          4        4     4
   Central log repository                 4        2     2
   Configurable logging                   2        3     2
   Replayable logging                     3        2     2
17 (Visualization)                     100%     100%  100%
   Visualization                          4        4     4
   Agent visualization capabilities       4        4     4
18                                     100%     100%  100%
   FIPA Compliant                         4        4     4


Appendix 4

Details of the

software architecture


Exemplary description of the D’ACCORD Architecture

During the evaluation of existing agent platforms, the consortium decided to use the platform D’ACCORD as a starting point for the development. Figure 55 gives a schematic view of the architecture of this platform. In the following part of the appendix, we briefly describe the depicted modules and parts of the system:

[UML component diagram omitted: it shows the Local Agent Manager, Broker, Remote Node Manager, Local Negotiation Manager, Internal and External Agent Hosts, the External Agent and Node Proxies, the IAgent / IAgentHost / INegotiation interfaces, and the middleware channels ExtAgent -> Node, Node -> ExtAgent and Node -> Node.]

Figure 55. Architecture of the agent platform D’ACCORD.

The baseline architecture in Figure 55 shows the following functional building blocks of a D’ACCORD node.

Components “Some Internal Agent” and “Some External Agent”: The system supports two types of agents: so-called internal and external software agents. Internal agents are instantiated by a D’ACCORD platform node and thus run in the node’s address space, whereas external agents are external processes that interact with D’ACCORD via network protocols. Both types of agents can be added and removed at runtime. An agent always registers with a single D’ACCORD node. Any node can be part of a mesh of nodes. D’ACCORD enables the agent to interact with other internal or external agents in a homogeneous way regardless of which node the agent originally registered with.

IAgent: Both types of agents (i.e. “Some Internal Agent” and “Some External Agent”) implement the IAgent interface, which is accessed by D’ACCORD for agent interaction and negotiation. Among others, the interface declares members for agent lifecycle management, agent communication, and notification of negotiation events.


IAgentHost: Each agent is assigned a host object, which serves as the interface for agent interaction and negotiation with other agents through D’ACCORD. There are two different realizations, “Internal Agent Host” and “External Agent Host”, to account for supporting both types of agents.

Internal Agent Host: This class is provided by D’ACCORD and realizes the IAgentHost interface. It is only used by an internal agent and forwards its requests to the “Local Agent Manager” for further processing.

External Agent Host: This class is provided for an external agent. It also realizes the IAgentHost interface. It is only used by an external agent and forwards its requests via the “ExtAgent -> Node” middleware channel.

ExtAgent -> Node Channel: This channel is provided by the communication middleware and connects an external agent to the Local Agent Manager of the D’ACCORD node the agent is registered with.

Local Agent Manager: The local agent manager keeps track of administrative information of all agents that are registered with this node, regardless of whether they are internal or external agents. In particular, it routes messages from other nodes, from other agents or from the market place to locally registered agents by calling the corresponding IAgent realization in the node’s address space. For delivering a message to an internal agent, the corresponding “Some Internal Agent” realization is called directly because the internal agent itself is available in the node’s address space. However, in case of an external agent the “Some External Agent” class cannot be called directly, because this class is instantiated in another address space out of direct scope of D’ACCORD. To this effect, D’ACCORD makes use of the proxy pattern and calls the “External Agent Proxy” instead.

External Agent Proxy: Objects of this proxy class behave like any other agent by implementing the IAgent interface. However, their only task is forwarding messages from a D’ACCORD node to an external agent via the “Node -> ExtAgent” channel.

Node -> ExtAgent Channel: This communication channel is provided by the middleware. It transmits messages from a D’ACCORD node to an external agent, and calls the corresponding IAgent interface method realized by the “Some External Agent” class in the external agent’s address space.

Broker: The broker takes messages from agents and decides whether to forward them to the “Local Negotiation Manager” in case of negotiation requests related to local negotiations, or to forward them to an agent / negotiation registered with another node. In the latter case the broker calls the “Remote Node Manager” of its own D’ACCORD node.

Remote Node Manager: This component keeps track of other D’ACCORD nodes. It is called by the broker to deliver messages such as administration, agent interaction and negotiation messages to other nodes. The Remote Node Manager communicates with other nodes via “Node Proxy” objects.

Node Proxy: Just as the “External Agent Proxy” serves as local representative for an external agent, the “Node Proxy” serves as local representative for another node and forwards messages to the “Node -> Node” channel of the communication middleware.


Node -> Node Channel: This middleware channel connects the “Node Proxy” to a remote D’ACCORD node and forwards messages via the remote node’s “Remote Node Manager” to the remote node’s broker, which then decides about delivering the messages to internal or external agents or – in case of a negotiation request message – to the “Local Negotiation Manager”.

Local Negotiation Manager: Each D’ACCORD node has a local negotiation manager. It is the component that enables agents to participate in negotiations. It forwards negotiation request messages (e.g. proposal submissions) from the broker to a particular negotiation by calling the negotiation’s INegotiation interface, and forwards negotiation events (e.g. agreements) from the negotiation to the broker for further delivery to the agents that registered as recipients for such events.

INegotiation: The INegotiation interface is realized by negotiations. A negotiation can be plugged into D’ACCORD at runtime by dynamically loading it into a node’s address space, similar to the process of adding internal agents. Each negotiation defines a negotiation protocol that must be obeyed by participating agents, a negotiation subject that agents are competing for, and custom negotiation properties, e.g. for defining the negotiation start and end or the delivery time of the negotiation subject. Common and well-known negotiations comprise implementations of bargaining, English auctions, Dutch auctions, double auctions, etc.
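As a rough illustration of such a pluggable negotiation, a toy English auction realizing the INegotiation idea might look as follows; the protocol is deliberately simplified (a single round of ascending bids) and the class is an illustrative sketch, not the D’ACCORD implementation:

```python
# Toy English auction: highest bid above the reserve price wins.
# Subject name, reserve price and agent names are invented for illustration.

class EnglishAuction:
    def __init__(self, subject, reserve_price):
        self.subject = subject
        self.reserve_price = reserve_price
        self.best = None  # (agent, bid)

    def submit_proposal(self, agent, bid):
        """Accept a bid only if it beats the reserve and the current best."""
        current = self.best[1] if self.best else self.reserve_price
        if bid > current:
            self.best = (agent, bid)
            return True
        return False

    def close(self):
        """Return the agreement delivered to registered recipients, if any."""
        if self.best is None:
            return None
        return {"subject": self.subject, "winner": self.best[0],
                "price": self.best[1]}

auction = EnglishAuction("rolling-slot-0815", reserve_price=100.0)
auction.submit_proposal("scheduler-a", 110.0)
auction.submit_proposal("scheduler-b", 105.0)  # rejected: below current best
agreement = auction.close()
```

In D’ACCORD terms, `submit_proposal` would be reached via the Local Negotiation Manager, and the agreement returned by `close` would be forwarded to the broker for delivery to the registered agents.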


Appendix 5

Agent and Holon

Definition Language

(AHDL)


Agent and Holon Definition Language – AHDL

The agent and holon definition language is the internal language of I2MSteel. For a better understanding of the underlying structure, it is helpful to collect methods and variables in pseudo-code. In the following paragraph we briefly summarize important parts of our registration / de-registration message concepts.

Excerpt of AHDL language classes in pseudo-code:

AHregister {
    id          Int32;      // id of the agent/holon
    name        char[21];   // name of the agent/holon
    category    char[101];  // category of the agent/holon
    location    char[21];   // physical location of the agent/holon
    description char[101];  // short description of the agent/holon
    address     char[51];   // address of the agent/holon, e.g. IP:Port or http
};

AHderegister {
    id Int32;               // id of the agent/holon
};

AHservice {
    id        Int32;
    services  AHserviceDescription;
    scope     char[21];
    ownership char[41];     // owner of the service
};

AHserviceDescription {
    name     char[21];
    type     char[21];
    ontology char[21];      // ontology supported by the service
    language char[21];      // language supported by the service
    access   Int16;         // 1=direct access 2=access via broker 3=both
};

AHreturn {
    id     Int32;
    retVal Int16;
    text   char[41];
};

AHsearchService {
    serviceName char[21];   // name of the requested service
    category    char[101];
};

AHserviceList {
    noOfElements Int16;
    serviceInfo  AHserviceInfo[noOfElements];
};

AHserviceInfo {
    id          Int32;
    serviceName char[21];
    category    char[101];
    address     char[51];
    access      Int16;
    ontology    char[21];
    language    char[21];
    security    Security;
};

AHrequestService {
    id          Int32;
    security    Security;
    serviceName char[21];
    addInfo     AddInfo;
};

AHAddInfo {
    cmtype  char[8];
    cmchart char[500000];
};

AHrequestedServiceInfo {
    id               Int32;
    serviceName      char[21];
    requestedservice AHrequestedService;
};

AHrequestedService {
    cmtype  char[8];
    cmchart char[500000];
};

AHcommonServices {
    name char[21];
};

AHserviceReturn {
    cmtype  char[8];
    cmchart char[500000];
};
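To make the fixed-size field layout concrete, the simplest message (AHderegister, a single Int32) could be serialized as follows; the little-endian byte order and the lack of framing are assumptions, since AHDL only fixes the field types and sizes:

```python
import struct

# AHderegister carries exactly one Int32 field (the agent/holon id);
# "<i" packs it as a 4-byte little-endian signed integer.
AH_DEREGISTER = struct.Struct("<i")

def encode_deregister(agent_id):
    return AH_DEREGISTER.pack(agent_id)

def decode_deregister(payload):
    (agent_id,) = AH_DEREGISTER.unpack(payload)
    return agent_id

wire = encode_deregister(4711)  # 4 bytes on the wire
```

The char[N] fields of the larger messages would analogously become fixed-size byte fields (e.g. `21s` for char[21]) in the same format string.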


Appendix 6

Report on related

Industrial Standards and Ap-

proaches


Related Industrial Standards and Approaches

Sonja Zillner (Siemens AG), Alexander Ebel (BFI)

Workpackage 2, Task 2.4

Date 22.02.2012 Version V.6

Summary

Within this report, we provide a comprehensive analysis and evaluation (in terms of domain coverage) of existing relevant industrial standards in the manufacturing domain for products and processes along the production line. The main focus of this report is to establish the basis for identifying and selecting relevant domain knowledge models that can be re-used and/or adapted for describing the I2MSTEEL demonstration scenario.


Table of Contents

1 Introduction 170
1.1 Semantic Modelling in the Steel Production Domain 170
1.2 Benefits of using Ontologies 170
1.3 Goal of our work 171
2 Industrial Standards 171
2.1 ANSI/ISA-88 and ISA-95 171
2.1.1 ANSI/ISA-88 172
2.1.2 ANSI/ISA-95 173
2.2 PSL 175
2.3 ESIDEL (European Steel Exchange Language) 176
2.4 OLE (OLE for process control) 177
3 Related Applications / Use Cases 179
3.1 Ontology for Planning and Scheduling in Primary Steel Production 179
3.2 MASON – MAnufacturing Semantic Ontology 180
3.3 ADACOR – Adaptive Holonic Control architecture for distributed manufacturing systems 180
4 Enterprise and Foundational Ontologies 181
4.1 Enterprise Ontologies 181
4.2 Foundational Ontologies 181
5 References 182

Table of Figures

Figure 1. Scope and Focus of ISA-88 and ISA-95 ................................ 172
Figure 2. Information exchange categories according to ANSI/ISA-S95.00.01-2000 ... 174
Figure 3. Main ESIDEL cycles ................................................. 176
Figure 4. Typical setup of an OPC installation ............................... 179

15 Introduction

15.1 Semantic Modelling in the Steel Production Domain

The current steel production environment is inflexible and yields many roadblocks for seamless and agile cooperation and information exchange. Due to the large variety of plants, such as slow metallurgical process engineering plants like blast furnaces or integrated mills, and processes, such as the process of iron making (conversion of iron ore to liquid iron), steelmaking (conversion of pig iron to liquid steel), casting (solidification of liquid steel), roughing rolling / billet rolling / cold rolling (reducing the size of blocks) or the process of product rolling (establishment of finished shapes), various systems and processes need to be semantically aligned. However, existing automation and IT environments in steel production are inflexible as the various systems rely on their own standards. To enable global optimization of the overall steel production value chain, high-level information exchange across system borders, locations and companies is required. For achieving this, one needs to manage vast amounts of different kinds of information and knowledge in a very precise and standardized (commonly agreed) way. In addition, for being able to compute the global optimization of the production process, one depends on the exchange of not only production and process information, but also of “higher value information”, such as functional specifications, requirements, decision rationale, and engineering knowledge. Doing so provides the basis for the intelligent integration and reasoning over distributed information assets of the different players along the steel production value chain to achieve optimal steel production system performance.

15.2 Benefits of using Ontologies

Ontologies seem to be the appropriate mechanism for addressing the mentioned challenges, in particular as ontologies address two major challenges with respect to the shareability and reusability of knowledge:

• The first challenge is the lack of interoperability among the various applications and systems that are part of the steel production system. In order to optimize the total supply chain, status information that is retrieved by one system must be made available to the other systems in the supply chain. However, such interoperability is hindered because different applications and systems may use different terminologies and representations of the domain. Even when applications use the same terminology, they often associate different semantics with the terms. This clash over the meaning of terms prevents the seamless and agile cooperation and exchange of information among applications and agents. The flexible and ad-hoc exchange of information along the steel production value chain requires some way of explicitly specifying the terminology of the participating applications, sectors, and systems in an unambiguous fashion.

• The second problem faced along the steel production value chain today is a lack of reusability. The knowledge bases that capture the domain knowledge of engineering and industrial applications are often tailored to specific tasks and projects. When the application is deployed in a different domain or connected to a new supplier, it does not perform as expected, often because assumptions are implicitly made about the concepts in the application, and these assumptions are not generic across domains.

Ontologies consist of a vocabulary along with some specification of the meaning or semantics of the terminology within the vocabulary. In this way, ontologies support interoperability by providing a common vocabulary with a shared semantics. Rather than developing point-to-point translators for every pair of applications, one simply needs to write one translator between each application’s terminology and the common ontology. Similarly, ontologies support reusability by providing a shared understanding of generic concepts that span multiple plants, processes, projects, tasks and environments.
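The "one translator per application" idea can be sketched in a few lines; the application names, term names and the common ontology identifier below are invented for illustration:

```python
# Each application maps its local terminology onto a shared ontology term,
# instead of maintaining a point-to-point mapping for every pair of systems.

COMMON = "steel:LadleTemperature"  # invented common ontology identifier

APP_TO_ONTOLOGY = {
    "mes": {"ladle_temp_c": COMMON},    # hypothetical MES vocabulary
    "erp": {"LadleTempDegC": COMMON},   # hypothetical ERP vocabulary
}

def to_common(app, term):
    """Translate an application-local term into the shared ontology term."""
    return APP_TO_ONTOLOGY[app][term]
```

With n applications this requires n translators instead of the n*(n-1)/2 pairwise mappings a point-to-point approach would need.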

As the use of ontologies in the manufacturing domain, as well as in the more specific domain of steel production, is still relatively new, there are only few relevant ontologies available for that domain. The ontologies, models and standards currently used in industrial practice and of interest for the I2MSteel project’s goals are the result of non-coordinated ad-hoc efforts that hinder interoperability among industrial communities. They address different communities, domains and tasks, cover different levels of detail and are represented in different representation languages.

15.3 Goal of our work

The goal of this work package is to formulate a comprehensive semantic model (common language) that covers all relevant aspects that need to be reflected to establish the basis for seamless information exchange in order to optimize the global steel production workflow. The formulated semantic model will define a precise and at the same time generic representation of the existing high-level knowledge of the steel production domain. For doing so, we require the combination of multiple ontologies that address partial aspects of the I2MSteel use case scenarios.

Several ontologies, standards or application scenarios can be (partially) reused or used as inspiration for engineering a generic semantic model for steel production. In this deliverable, we outline the state of the art of industrial standards, ontologies and accomplished application scenarios that are of interest and relevance for the I2MSteel project. The following section describes existing industrial standards that cover the concepts and terminologies required for seamless information exchange between the involved systems and applications along the overall steel production workflow. In the third section, we describe three related research approaches that established and implemented an ontology for the steel production domain. We finish this report by providing a rough overview of existing enterprise ontologies covering concepts and terminology of generic enterprise processes, as well as foundational ontologies establishing meta-models that help to integrate various knowledge models in an integrated manner.

16 Industrial Standards

16.1 ANSI/ ISA-88 and ISA-95

The two standards ANSI/ISA-88 and ANSI/ISA-95 are closely related. ANSI/ISA-95 is the international standard for the integration of enterprise and control systems, and ANSI/ISA-88 contains models and terminologies for controlling batch processes. The reason for their similarity (or, better, relatedness) is the fact that many of the ANSI/ISA-95 committee members have also been actively involved in developing the older ANSI/ISA-88 standard for batch control.

However, both standards differ in terms of their purpose, such that manufacturing companies are more and more making use of both standards.

• ANSI/ISA-88 is used for automating the control of machines and devices
• ANSI/ISA-95 is used for the exchange of information between ERP and MES systems.

For the seamless interplay and integration of the various enterprise and control systems relying on the two standards, in particular MES software applications, explicit guidelines are required that describe the rationale for when to use ISA-88 models and when to use ISA-95 models. The following figure illustrates the scope and focus of the two standards.


Figure 56. Scope and Focus of ISA-88 and ISA-95

16.1.1 ANSI/ISA-88

ANSI/ISA-88 – or S88, as it is more commonly referred to – defines a hierarchical recipe management and process segmentation framework that establishes the basis to separate products from the processes that make them. By separating recipes (i.e. the units that drive the process for making a product) from the control unit that drives equipment behaviour, the standard facilitates reuse and flexibility of equipment. In addition, it provides a structure for coordinating and integrating recipe-related information across the traditional ERP, MES and control domains.

As the process description is also applicable to manual processes, S88 is more than a standard for software development. It covers the domain of automation control of machines and devices. In general, the associated manufacturing operations can be classified as discrete, continuous, or batch (all three types are covered by S88):

• Discrete processes are used for the production of things, such as automobiles. One or a specific number of parts move from one workstation to another, whereby each location enhances the value of the thing. The identity of the thing is unique and maintained along the whole manufacturing process.

• Continuous processes describe the continuous flow of material through various processing units and equipment, such as the production of gasoline. The overall goal of the process is to produce a consistent product.

• A batch process is defined as a “process that leads to the production of finite quantities of materials by subjecting quantities of input materials to an ordered set of processing activities over a finite period of time using one or more pieces of equipment” (ANSI/ISA-88.01-2010).

The purpose of S88 is to establish a standard that allows describing (and thus handling and organizing) the production of products in a consistent way, such that the communication of user requirements, the integration among batch automation suppliers, or the configuration of batch controls is realized in a standardized and agreed manner.

One key aspect of S88 is its modular design that establishes the basis for the segmentation of processes. Both recipes and processes are made up of smaller pieces. The recipes are described by concentric parts, for instance the first level recipe is labelled as Unit Procedures. Similarly, the process (or even equipment) consists of various layers of modules, i.e. a process is described as an ordered set of process stages, with process stages being described as an ordered set of process operations, and process operations defined as an ordered set of process actions. Due to the modularity of recipes and processes, the recipe component and the associated equipment control component can be described and implemented in a flexible manner.
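The process segmentation described above (process -> process stages -> process operations -> process actions) can be illustrated with a simple nested structure; all concrete stage, operation and action names are invented for illustration:

```python
# Nested sketch of the S88 process model: a process is an ordered set of
# stages, each stage an ordered set of operations, each operation an
# ordered set of actions.

process = {
    "name": "steelmaking",
    "stages": [
        {"name": "melting",
         "operations": [
             {"name": "charge-furnace", "actions": ["open-lid", "add-scrap"]},
             {"name": "heat", "actions": ["set-power", "hold-temperature"]},
         ]},
        {"name": "tapping",
         "operations": [
             {"name": "tap-heat", "actions": ["tilt-furnace", "fill-ladle"]},
         ]},
    ],
}

def count_actions(process):
    """Walk the hierarchy: stages -> operations -> actions."""
    return sum(len(op["actions"])
               for stage in process["stages"]
               for op in stage["operations"])
```

Because each layer is an ordered, self-contained module, a stage or operation can be reused in another recipe without changing the equipment control beneath it, which is the flexibility the standard aims at.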

ISA approved the standard in 1995 and the last update was submitted in 2010. In 1997, IEC adopted the original version as IEC 61512-1. The current version of S88 consists of four different parts:

• ANSI/ISA-88.01-2010 Batch Control Part 1: Models and terminology
• ANSI/ISA-88.00.02-2001 Batch Control Part 2: Data structures and guidelines for languages
• ANSI/ISA-88.00.03-2003 Batch Control Part 3: General and site recipe models and representation
• ANSI/ISA-88.00.04-2006 Batch Control Part 4: Batch Production Records

And one implementation example:

• ISA-TR88.00.02-2008 Machine and Unit States: An Implementation Example of ISA-88

16.1.2 ANSI/ISA-95

ANSI/ISA-95 – or ISA-95, as it is more commonly referred to – is a standard that establishes models, terminology, and a consistent set of concepts suitable for defining the data exchange and communication between the business and the manufacturing world. More precisely, ISA-95 defines the interface between an enterprise’s business systems, such as the systems for business planning and logistics, and the associated manufacturing control systems.

The objective of ISA-95 is to establish a commonly agreed and consistent terminology that can be used for the communication and flexible information exchange between ERP and MES systems. By describing consistent information models as well as consistent operation models, the standard establishes the basis for a transparent and clear description of application functionality as well as for the precise specification of how information is to be used.

The ISA-95 standard consists of five parts, of which the first three are completed:

• ANSI/ISA-95.00.01-2000, Enterprise-Control System Integration Part 1: Models and Terminology: It covers the required terminology and object models, which allow one to specify which information should be exchanged.

• ANSI/ISA-95.00.02-2001, Enterprise-Control System Integration Part 2: Object Model Attributes: It covers the attributes for the objects defined in Part 1.

• ANSI/ISA-95.00.03-2005, Enterprise-Control System Integration Part 3: Models of Manufacturing Operations Management: The focus of this part is the functions and activities at level 3 (production / MES layer). It establishes guidelines for specifying and comparing production levels of different sites in a standardized manner.

The remaining two parts are still in development:

• ISA-95.04, Object Models & Attributes, Part 4 of ISA-95: Object models and attributes for Manufacturing Operations Management: It specifies object models that determine the information exchange between MES activities (defined in Part 3).

• ISA-95.05, B2M Transactions, Part 5 of ISA-95: Business-to-manufacturing transactions: It specifies transactions between office and production automation systems. Part 5 can be used in combination with the object models defined in Parts 1 and 2.

For modelling the semantic model in I2M-Steel, ISA-95 Parts 1 and 2 are of highest relevance, as both technical specifications specify, in three main categories, how information should be exchanged:

• Production Capability Information encompasses any information about resources required for production at a particular point in time, including information regarding equipment, material, personnel, and process segments. By describing the names, terms, statuses and quantities of which the manufacturing control system is knowledgeable, the information exchange category Production Capability Information establishes a “vocabulary” for the capacity scheduling task.

• Production Definition Information establishes the collection of information that is shared between three kinds of instances: the production rules, the bill of material, and the bill of resources.

• Production Information is composed of two models: a) the scheduling information describing the request for production, and b) the performance information specifying the response to the request.
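As a rough illustration of the Production Information category, the following hedged Python sketch pairs a scheduling request with a performance response; the class and field names are invented for illustration and are not the normative ISA-95 attribute names:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch of the request/response pair described above:
# a production schedule crosses from the business system to the
# manufacturing system, and a production performance record travels back.

@dataclass
class ProductionRequest:
    product_id: str
    quantity_t: float          # requested tonnage

@dataclass
class ProductionSchedule:      # business -> manufacturing (the request)
    schedule_id: str
    requests: List[ProductionRequest] = field(default_factory=list)

@dataclass
class ProductionPerformance:   # manufacturing -> business (the response)
    schedule_id: str
    produced_t: Dict[str, float] = field(default_factory=dict)

def fulfilment(schedule: ProductionSchedule,
               performance: ProductionPerformance) -> Dict[str, float]:
    """Fraction of each requested quantity actually produced."""
    return {r.product_id:
            performance.produced_t.get(r.product_id, 0.0) / r.quantity_t
            for r in schedule.requests}

sched = ProductionSchedule("S-001", [ProductionRequest("HRC-1200", 500.0)])
perf = ProductionPerformance("S-001", {"HRC-1200": 450.0})
print(fulfilment(sched, perf))  # {'HRC-1200': 0.9}
```

The point of the standard is that both sides agree on such shared object shapes in advance, so that the comparison of request and response needs no ad hoc translation.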

Figure 57. Information exchange categories according to ANSI/ISA-S95.00.01-2000

By providing a detailed description of the suggested object models and their attributes, the two standards determine how information crosses the boundaries between business systems and manufacturing systems. Thus, the ANSI/ISA-95 standard establishes an important basis and common understanding for the development and integration of enterprise information systems at the different enterprise levels. The standard facilitates and standardizes the information exchange between the information systems of the various manufacturing levels. Whenever appropriate, we will reuse and rely on the standardized information exchange entities and align more specific domain ontologies accordingly.


16.2 PSL

The Process Specification Language (PSL) has been designed to facilitate correct and complete exchange of process information (Gruninger, 2004). It was developed by the technical committee ISO TC 184, Industrial automation systems and integration, and published as an ISO standard. It builds on first-order logic and is designed in a modular way, i.e., its axioms are organized into PSL-Core and a set of extensions.

The purpose of the PSL-Core module is to establish intuitive semantic primitives adequate for describing the fundamental concepts of manufacturing processes. Basically, PSL-Core16 defines four kinds of entities that establish the basis for automated reasoning about processes:

• Activities: Intuitively, activities can be considered to be reusable behaviours within the domain. Activities may have multiple occurrences, or there may exist activities that do not occur at all.

• Activity occurrences: An activity occurrence is associated with at least one unique activity and begins and ends at a specific point in time.

• Timepoints: Timepoints are linearly ordered, forward into the future and backward into the past.

• Objects: An object is anything that is neither a timepoint, nor an activity, nor an activity occurrence. Similar to activity occurrences, objects begin and end at a specific point in time.

In order to describe the above-depicted semantics of those four concepts, PSL-Core relies on four temporal relations (before, after, beginof and endof) as well as one structuring relation, participate_in. In sum, PSL-Core establishes a very generic characterization of the basic nature of processes; however, due to its simplicity, it is rather weak in terms of logical expressiveness.
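To make the interplay of these primitives concrete, the following toy Python sketch models activities, activity occurrences and the before relation; timepoints are simplified to numbers here, whereas PSL proper axiomatises them in first-order logic, and all example names are invented:

```python
from dataclasses import dataclass

# Conceptual sketch of the four PSL-Core entity kinds and the
# before/beginof/endof relations described in the text.

@dataclass(frozen=True)
class Activity:
    name: str

@dataclass(frozen=True)
class ActivityOccurrence:
    activity: Activity
    begin: float   # beginof(occurrence)
    end: float     # endof(occurrence)

def before(t1: float, t2: float) -> bool:
    """Timepoints are linearly ordered."""
    return t1 < t2

def occurs_before(o1: ActivityOccurrence, o2: ActivityOccurrence) -> bool:
    """o1 finishes before o2 starts."""
    return before(o1.end, o2.begin)

casting = Activity("ContinuousCasting")
rolling = Activity("HotRolling")
o1 = ActivityOccurrence(casting, begin=0.0, end=4.0)
o2 = ActivityOccurrence(rolling, begin=5.0, end=7.5)
print(occurs_before(o1, o2))  # True
```

Note that an activity (`HotRolling`) is distinct from its occurrences: the same activity could occur many times, or never, exactly as the bullet list above states.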

In order to supplement the core concepts, PSL includes a set of extensions that introduce new terminologies for dedicated applications or reasoning scenarios, such as duration and ordering theory, resource theories, actor and agent theories, activity extensions, temporal and state extensions, as well as activity and ordering extensions.

In this way, further adaptation to concrete applications can be realized by simply extending the core module with the respective additional module(s) covering the concepts needed. For instance, PSL-Core could be used as a foundation for specifying an ontology of cutting processes in the manufacturing of sheet metal parts (Gruninger and Delaval, 2009). The established Cutting Process Ontology is made up of three sets of first-order axioms:

• The Shape Ontology, which enables 2D-object recognition in scenes by using mereotopological relations (i.e. parthood and connection)

• The Shape Cutting Ontology, which defines a cutting process to be any activity that creates at least two new edges in some surface

• The Cutting Process Taxonomy, which establishes a classification of the basic cutting processes

The resulting full ontology of this application allows describing all possible ways to change a surface as the result of a cutting process, together with a taxonomy of classes of cutting processes. This application scenario demonstrates how flexibly PSL-Core and its extensions can be aligned and combined to describe the underlying processes of a dedicated use case scenario. As PSL is designed as an interlingua ontology for industrial domains, it can be deployed in larger integration problems that span the entire supply chain, both within an enterprise and also between enterprises. Due to the complex semantics of PSL, full-fledged first-order reasoners are needed for general reasoning on PSL.

16 The axiomatization of PSL-Core can be accessed at http://www.mel.nist.gov/psl/psl-ontology/pslcore_page.html (specified in Common Logic Interchange Format (CLIF))

In the context of the I2M-Steel project, we will reuse the idea of the modular design by establishing a core ontology that comes along with a set of extensions introducing various different aspects. In addition, we will reuse the basic process modelling design whenever this is needed and appropriate within our use case scenarios.

16.3 ESIDEL (European Steel Exchange Language)

ESIDEL is an XML-based data interchange standard developed by the EDIFER committee of EUROFER, the European Steel Association. EDIFER was a group within EUROFER responsible for the development of solutions for Electronic Data Interchange (EDI) in the field of electronic commerce, with special focus on Business-to-Business (B2B) relations. In 2002, based on the growing acceptance of e-commerce, EDIFER decided to develop an XML-based standard for data interchange between the steel industry and its customers. In 2004, EDIFER released the first version of ESIDEL17, covering business processes such as quotation, ordering, shipping and invoicing. In 2006, EDIFER published a follow-up version of ESIDEL, 1.118, integrating some additional aspects requested by users.

In general, ESIDEL provides a unified format for data interchange describing the possible information flow between the steel industry and the involved partners. Further, it elaborates the different business scenarios, roles and functions that the parties perform, and defines a common understanding of the different views on and terminology of trading transactions.

In the global trading cycle between customers and a supplier, ESIDEL defines six main sub-cycles, as presented in Figure 58.

Figure 58. Main ESIDEL cycles

Each cycle comprises different processes describing the different aspects of the information exchange. In the following, a more detailed description of the cycles is presented.

• Basic information cycle: is used to exchange information about trading parties in new relationships or to update relevant information between existing partnerships. It supports two processes. The first process, “Party information”, allows exchanging information such as name, address, party identifiers and contact information between the involved parties. The second process, “Sales catalogue”, allows the supplier to exchange information about his sales catalogue, such as product identification, product specification and price information.

17 ESIDEL, Version 1.0, http://www.eurofer.org/eurofer/edifer/publications_ESIDEL1_0.htm

18 ESIDEL, Version 1.1, http://www.eurofer.org/eurofer/edifer/publications_ESIDEL1_1.htm


• Ordering cycle: In the ordering cycle it is foreseen to transmit the relevant information concerning requests for quotation, contract information, orders, order responses and order changes. It consists of five sub-processes: the “Quotation” process, the “Contract” process, the “Sales list” process, the “Purchase Order” process and the “Order Status” process.

• Scheduling cycle: allows the customer to specify the delivery requirements, such as the times and quantities to be delivered. The scheduling information can be given at different levels of detail, such as a forecast, firm deliveries or just-in-time deliveries. The scheduling cycle defines three sub-processes: the “Delivery agreement” process, the “Forecast and firm delivery” process and the “Just in time delivery” process.

• Shipping cycle: defines the information exchanged between the supplier, the customer, the carrier and third parties concerning the selection, shipment and receipt of goods. Further, it is possible to transmit a certificate or quality report referring to the shipped goods between the supplier and the customer. The shipping cycle is divided into three processes: the “Selection of goods” process, the “Dispatch & Receiving” process and the “Certificate / Quality report” process.

• Invoicing cycle: defines the information transmitted between the supplier and the customer for the supply of goods and services. The cycle also includes the handling of debit and credit notes and may be superseded by the self-billing process. The payment and remittance procedures are not part of the cycle, but details are contained within the data structure. The invoicing cycle is divided into two processes: the “Traditional invoice” process and the “Self-billing invoice” process.

• Payment cycle: covers the exchange of information between the customer, the supplier and the bank about the payment of outstanding invoices. The cycle is divided into two processes, the “Payment order” process and the “Remittance advice” process, defining the relevant information for the transfer of funds.
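To convey the flavour of such an XML-based interchange message, the following Python sketch builds a minimal purchase-order document with the standard library; the element names are invented for illustration and do not reproduce the actual ESIDEL schema, which is defined by the EDIFER publications cited above:

```python
import xml.etree.ElementTree as ET

# Hypothetical ordering-cycle message: element names are illustrative only,
# not taken from the real ESIDEL XML schema.

def build_purchase_order(order_id: str, buyer: str,
                         product: str, tonnes: float) -> str:
    root = ET.Element("PurchaseOrder", id=order_id)
    ET.SubElement(root, "Buyer").text = buyer
    line = ET.SubElement(root, "OrderLine")
    ET.SubElement(line, "Product").text = product
    ET.SubElement(line, "Quantity", unit="t").text = str(tonnes)
    return ET.tostring(root, encoding="unicode")

xml_doc = build_purchase_order("PO-42", "ExampleSteelBuyer",
                               "HotRolledCoil", 120.0)
print(xml_doc)
```

The value of a shared schema is that both trading parties can generate and validate such documents mechanically, instead of interpreting free-form order texts.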

Today, the ESIDEL standard is of minor relevance in the steel industry. After the release of version 1.1 in 2006, it was intended to adapt the ESIDEL standard to the UN/CEFACT (United Nations Centre for Trade Facilitation and Electronic Business) XML standard and publish version 2 of ESIDEL, but due to the lack of interest of the key participants, further development was stopped at the end of 2006. Nevertheless, the ESIDEL standard provides a steel-industry-specific view on trading transactions and contains much useful information, which makes it meaningful for the I2MSteel project.

16.4 OPC (OLE for Process Control)

OPC is a series of industrial standard specifications describing interfaces for data exchange between control devices from different manufacturers. The first standard (now called the Data Access specification) was developed by a task force of the automation industry in cooperation with Microsoft in 1996. Today the activities are managed by the OPC Foundation, an organization with currently 470 members, including almost all major providers of instrumentation and process control systems worldwide.

The name OPC stands for OLE for Process Control, where OLE means Object Linking and Embedding, a Microsoft-Windows-based inter-process communication mechanism. OLE provides an infrastructure for Windows-based applications, allowing them to manipulate shared objects exported by other applications. OLE itself is based on the Microsoft binary-interface standard COM (Component Object Model). COM defines a language-neutral way of implementing objects which can be shared between applications. An extension of the COM interface called DCOM (Distributed Component Object Model) allows such objects to be exchanged over the network, across machine boundaries.

The aim of OPC was to develop an interoperability standard for industrial automation and other related domains, allowing the cooperation of applications in multi-vendor systems. Today the OPC Foundation has released the following specifications related to the different aspects of communication in automation applications:

• OPC Data Access (DA): is the original specification for data access using the COM interfaces. OPC DA provides the capability to exchange real-time data between control devices, e.g. PLCs (Programmable Logic Controllers), and clients, e.g. SCADA (Supervisory Control and Data Acquisition) systems. Further, OPC DA defines a browsing interface allowing the discovery of the available OPC servers and exposed OPC tags.

• OPC Alarms & Events (AE): is the specification for the transmission and management of alarm and event notifications, including process alarms, operator actions, informational messages and tracking/auditing messages. Further, OPC AE provides a filtering ability allowing clients to subscribe to the types of messages they need.

• OPC Batch: is the specification covering the special needs of batch processes. It covers four basic types of information: equipment capabilities, current operating conditions, historical data and recipe contents according to the ANSI/ISA-88.01-1995 standard.

• OPC Data eXchange: is the specification for server-server communication, rather than the server-client communication defined in OPC DA. Besides the data exchange, it also covers remote configuration, diagnostic and monitoring/management services.

• OPC Historical Data Access (HDA): is the specification defining the interfaces for retrieval and storage of historical data, allowing the most common types of manipulation such as trending, automatic updates on new values, calculation of aggregate data etc.

• OPC Security: is the specification defining how to control client access in order to protect sensitive information and how to prevent unauthorized modifications of process variables. The specification addresses security issues related to OPC DA, AE and HDA.

• OPC XML-DA: is the specification for XML (Extensible Markup Language) based exchange of real-time data.

• OPC Command: is the specification defining how OPC clients can identify, send and monitor control commands which are executed on the devices represented by the server.

• OPC Unified Architecture (UA): is a new set of specifications which replaces the former specifications (OPC DA, HDA, AE) using a cross-platform, web-based communication protocol instead of OLE/COM. This is due to the fact that Microsoft has de-emphasized COM in favour of cross-platform-capable web services and SOA (Service-Oriented Architecture). OPC UA offers several advantages compared to the COM-based specifications, such as unified access, better interoperability, platform independence, communication via standard HTTP or TCP protocols even across the internet, a standard security model, etc.

In general, the communication model used by OPC is based on a client-server architecture, where the OPC server typically communicates with control devices like PLCs using proprietary protocols or a standardized fieldbus (e.g. PROFIBUS, Modbus, CAN). The process values or sensor values provided by the controllers are then wrapped by the OPC server application, allowing OPC clients to access this data using a standardized software interface, e.g. OPC DA. Figure 59 shows the typical setup of an OPC installation.
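The client-server pattern described above can be sketched, independently of any real OPC library, as a server that wraps process values behind named tags which clients can browse and read; all tag names and values below are illustrative:

```python
# Conceptual sketch (not a real OPC implementation): the server hides the
# fieldbus/PLC specifics and exposes a uniform browse/read interface to
# clients, mirroring the OPC DA roles described in the text.

class TagServer:
    def __init__(self):
        self._tags = {}

    def update(self, tag: str, value: float) -> None:
        """Called on new values arriving from a PLC via the fieldbus."""
        self._tags[tag] = value

    def browse(self):
        """Clients can discover the exposed tags (cf. OPC DA browsing)."""
        return sorted(self._tags)

    def read(self, tag: str) -> float:
        """Clients read current process values through the uniform interface."""
        return self._tags[tag]

server = TagServer()
server.update("Furnace1.Temperature", 1547.0)
server.update("Furnace1.Pressure", 1.2)
print(server.browse())                       # ['Furnace1.Pressure', 'Furnace1.Temperature']
print(server.read("Furnace1.Temperature"))   # 1547.0
```

The essential point is the decoupling: clients never see the proprietary PLC protocol, only the standardized tag interface, which is what enables multi-vendor interoperability.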


Figure 59. Typical setup of an OPC installation

17 Related Applications / Use Cases

Besides the industrial standards, there exist semantic modelling approaches, such as the Ontology for Planning and Scheduling in Primary Steel Production (Dobrev et al., 2008), MASON, the MAnufacturing’s Semantic Ontology (Lemaignan et al., 2006), and the ADACOR ontology (Leitao, 2004), that provide valuable input for developing the I2M semantic model.

17.1 Ontology for Planning and Scheduling in Primary Steel Production

The described application scenario (Dobrev et al., 2008) aims to establish a basis for the efficient integration of manufacturing execution systems for optimizing planning and scheduling tasks in primary steelmaking. In order to achieve this aim, the authors propose a meta-ontology conforming to the ANSI/ISA-95 standard (ANSI/ISA-95.00.01-2000 / ANSI/ISA-95.00.02-2001), as well as different domain-specific ontologies that cover the related concepts of the primary steelmaking tasks.

By developing a meta-ontology that conforms to the ANSI/ISA-95 standard, they establish the basis for a standard structure and framework for building domain ontologies and for integrating different domain ontologies. Basically, they establish a standard methodology for developing semantic models for planning and scheduling tasks within primary steel production.

The meta-ontology is built using the Web Ontology Language (OWL) and comprises 105 classes, 300 object type properties, 394 data type properties, and 187 restrictions upon properties. The meta-ontology is a hierarchically structured set of classes that describe concepts representing

• three different categories of Resources, namely Personnel, Equipment and Material,

• Process Segments, establishing the basis for the logical grouping of resources, and

• four main models covering
  o Product Definition Information (How to make a product?)
  o Production Capability Information (What is available to use?)
  o Production Scheduling Information (What to make and use?)
  o Production Performance Information (What was made and used?)

The naming, detailing and structuring of concepts is in accordance with the ANSI/ISA-95 standard. For instance, the concept Material is composed of four main objects, namely Material Class, Material Definition, Material Lot and Material Sublot, which again are associated with the corresponding classes describing the properties as specified in ANSI/ISA-95 Parts 1 & 2.

In this way, any domain ontology that follows the same methodological development approach can be seamlessly integrated within the established meta-ontology. For instance, the Hot Strip Mill (HSM) domain ontology (Dobrev et al., 2007) – beside many other concepts – covers every concept related to material and its various characteristics, such as raw material, finished material or semi-finished material; i.e., Slabs are classified as raw material concepts, Hot Rolled Coils, Hot Rolled Floor Plates, Hot Rolled Sheets and Hot Rolled Strips are classified as finished materials, and so on. Thus, simply by sharing similar meta-classes for classifying similar concept categories, the ad hoc and seamless integration of various domain ontologies can be easily realized.
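The meta-class-based integration idea can be sketched as follows; since the code of the original meta-ontology is not available, all names below are illustrative stand-ins for the ISA-95-style meta-classes, not fragments of the actual model:

```python
# Hedged sketch of the integration mechanism described above: domain concepts
# are attached to shared ISA-95-style meta-classes, so two independently
# developed domain ontologies can be aligned by matching meta-classes.

META_CLASSES = {"MaterialClass", "MaterialDefinition",
                "MaterialLot", "MaterialSublot"}

class DomainConcept:
    def __init__(self, name: str, meta_class: str):
        if meta_class not in META_CLASSES:
            raise ValueError(f"unknown meta-class: {meta_class}")
        self.name = name
        self.meta_class = meta_class

def integrable(a: DomainConcept, b: DomainConcept) -> bool:
    """Two domain concepts can be aligned if they share a meta-class."""
    return a.meta_class == b.meta_class

slab = DomainConcept("Slab", "MaterialClass")           # e.g. HSM domain ontology
coil = DomainConcept("HotRolledCoil", "MaterialClass")  # another domain ontology
print(integrable(slab, coil))  # True
```

Because every domain ontology classifies its concepts against the same small set of meta-classes, the merge step reduces to matching meta-classes rather than comparing arbitrary domain vocabularies.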

Summarizing, we can state that this approach is a very valuable reference for developing the framework for the generic ontology covering the various use cases in the I2M-Steel project. Thus, we are planning to develop an ANSI/ISA-88- and ISA-95-conformant meta-ontology that establishes the basis for integrating the domain ontologies of the various use cases. Since the code of the meta-ontology is not available19 and the focus of the I2M-Steel use case scenarios is not limited to scheduling and planning tasks, we will – within our future knowledge modelling approach – reuse their conceptual ideas whenever appropriate, but will not reuse concrete fragments of their meta-ontology.

17.2 MASON – MAnufacturing Semantic Ontology

MASON is a manufacturing upper ontology that is available in OWL and addresses very general manufacturing purposes by focusing on manufacturing operations. It is based on descriptions of the manufacturing domain by Martin and D’Acunto (2003), where manufacturing has been defined as a composition of product, process and resource. Reflecting this approach, MASON (cp. Lemaignan et al., 2006) defines three head concepts:

• Entities: define the common helper concepts that allow describing the product properties in an abstract way. The most important sub-concepts among these entities are: Geometric entities, Raw material and Cost entities.

• Operations: cover all processes related to manufacturing, such as Manufacturing operations (including machining operations as well as control or assembly), Logistic operations, Human operations and Launching operations.

• Resources: describe all resources which are linked to the manufacturing process, such as Machine tools, Tools, Human resources and Geographic resources.

In total, MASON defines up to 270 base concepts and 50 properties binding them. The source of the MASON ontology is publicly available and can be downloaded from the SourceForge website20. From the point of view of developing the I2MSteel ontology, MASON offers some design aspects which can be reused within the project.

17.3 ADACOR – Adaptive Holonic Control architecture for distributed manufacturing systems

ADACOR is an architecture based on a set of autonomous, intelligent and co-operative holons which represent the factory components; these can be either physical resources, such as machines and robots, or logical entities, such as orders. Each component in the system is represented by a holon that contains all knowledge related to the component. To describe this knowledge, ADACOR uses its own proprietary ontology, expressed in an object-oriented, frame-based manner as recommended by FIPA (Foundation for Intelligent Physical Agents).

19 Our attempt to contact the authors remained unanswered.

20 MASON Ontology public source code, 2005, http://sourceforge.net/projects/mason-onto/


The ADACOR-concepts are expressed in terms of classes and objects. The main concepts defined by the ADACOR ontology are (cp. Borgo and Leitao 2004):

• Product: entity produced by the enterprise (it includes sub-products).
• Raw-material: entity acquired outside the enterprise and used during the production process.
• Customer order: entity that the enterprise receives from a customer that requests some products.
• Production order: entity obtained by converting the customer and forecast orders.
• Work order: entity generated by the enterprise in order to describe the production of a product.
• Resource: entity that can execute a certain range of jobs as long as its capacity is not exceeded.
• Operation: a job executed by one resource.
• Disturbance: unexpected event, such as a machine failure or delay.
• Process Plan: description of a sequence of operations, including temporal constraints like precedence of execution, for producing a product.
• Property: an attribute that characterizes a resource or that a resource should satisfy to execute an operation.

The manufacturing ontology used in ADACOR defines a taxonomy of manufacturing components suitable for analyzing and formalizing manufacturing problems. As such, it offers many facets which can be adapted to the I2MSteel ontology.

18 Enterprise and Foundational Ontologies

The overall goal in knowledge modelling is to reuse and integrate related knowledge models and standards whenever possible and appropriate. Thus, besides the industrial standards and concrete ontologies in the steel production domain, two other areas of related work, namely enterprise and foundational ontologies, are of importance for our work.

18.1 Enterprise Ontologies

Several approaches exist that aim to capture the enterprise model as a computational representation of the structure, activities, processes, information, goals, and constraints of enterprises; these will be of relevance for I2M-Steel.

For instance, the Toronto Virtual Enterprise Project (TOVE) (Fox, 1992) provides a precise specifi-cation of enterprise structure, and uses this structure to characterize process integration within the enterprise.

The Enterprise Ontology (Uschold et al., 1998) establishes a collection of terms and definitions relevant to enterprises that provides means to handle changes in planning and helps to increase flexibility and effective communication and integration in enterprises.

18.2 Foundational Ontologies

In order to integrate the different models in the manufacturing and engineering domain in a consistent and coherent way, there has been further research towards general and well-structured models, called foundational ontologies. The aim of foundational ontologies is to offer general approaches for semantic interoperability and well-founded conceptual modelling. By setting a general framework that can be tailored to any application domain, they furnish a reliable framework for information sharing and exchange in any application area.


The Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE)21, designed as a foundational ontology, is essentially an ontology for ontologies, and so it has no direct application to the manufacturing domain. However, DOLCE has already been applied in the manufacturing domain (Borgo and Leitao, 2007). Because of its modelling features, such as the distinction between objects, the differentiation among quality categories, and its fine description of properties and capacities, it provides a good framework for the manufacturing area.

The Suggested Upper Merged Ontology (SUMO)22 was created from publicly available ontologies, in an effort to form the basis for the IEEE project on a Standard Upper Ontology (SUO)23. SUO is an effort to create an all-purpose, general ontology for intercommunication, automated reasoning, and other semantic applications. Consequently, both SUMO and SUO are not domain specific but are designed with the intention that they be built upon for the desired application. We expect those fundamental approaches to modelling enterprises in the manufacturing and engineering domain to provide us orientation in connecting and relating the different relevant domain models at a higher level of abstraction.

19 References

(Borgo and Leitao, 2004) Borgo, S. and Leitao, P. 2004. The Role of Foundational Ontologies in Manufacturing Domain Applications. In Lecture Notes in Computer Science: On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE. Meersman, Robert; Tari, Zahir (Eds.), pp. 670-688, Springer Berlin Heidelberg, 2004.

(Borgo and Leitao, 2007) Borgo, S. and Leitao, P. 2007. Foundations for a Core Ontology of Manufacturing. In Ontologies: A Handbook of Principles, Concepts and Applications in Information Systems. Sharman, Raj; Kishore, Rajiv; Ramesh, Ram (Eds.), Integrated Series in Information Systems, Vol. 14, pp. 751-776, Springer, 2007.

(Dobrev et al., 2007) M. Dobrev, D. Gocheva, I. Batchkova, “Ontology development to support the knowledge based planning of hot strip mill production plant,” In Proceedings of the 2nd Annual SEERC Doctoral Student Conference, July 22-23, Thessaloniki, Greece, 2007.

(Dobrev et al., 2008) M. Dobrev, D. Gocheva, and I. Batchkova. An Ontological Approach for Planning and Scheduling in Primary Steel Production. In Proceedings of the 4th International IEEE Conference “Intelligent Systems”, September 2008.

(Fernández-López et al., 1999) M. Fernández-López, A. Gómez-Pérez, J. Pazos, and A. Pazos. Building a chemical ontology using methontology and the ontology design environment. IEEE Intelligent Systems and Their Applications, 14(1):37-46, 1999.

(Fox, 1992) Fox, M.S., (1992), "The TOVE Project: A Common-sense Model of the Enterprise", Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, Belli, F. and Radermacher, F.J. (Eds.), LNAI 604, Berlin: Springer-Verlag, pp. 25-34.

(Gruninger, 2004) Gruninger, M. 2004. Ontology of the Process Specification Language. In Handbook on Ontologies, Steffen Staab and Rudi Studer (Eds.), Springer, pp. 575-592.

(Gruninger and Delaval, 2009) Gruninger, M. and Delaval, A. (2009) A First-Order Cutting Process Ontology for Sheet Metal Parts, Fifth Conference on Formal Ontology Meets Industry, Vicenza, Italy.

21 http://www.loa-cnr.it/DOLCE.html

22 http://www.ontologyportal.org/

23 http://suo.ieee.org/index.html


(Gruninger and Fox, 1994) M. Gruninger and M. Fox. The design and evaluation of ontologies for enterprise engineering. In N. Mars (Ed.), Working Papers, European Conference on Artificial Intelligence ECAI'94 Workshop on Implemented Ontologies, 1994, Amsterdam.

(Leitao et al., 2005) Leitao, P., Colombo, A. and Restivo, F., 2005: “ADACOR: A Collaborative Production Automation and Control Architecture”, IEEE Intelligent Systems, 20(1), 58-66.

(Leitao, 2004) Leitao, P.: “An Agile and Adaptive Holonic Architecture for Manufacturing Control”, doctoral dissertation, Dept. of Electrical Engineering and Computer Engineering, University of Porto, Portugal, 2004.

(Lemaignan et al., 2006) S. Lemaignan, A. Siadat, J.Y. Dantan, A. Semenenko, MASON: A Proposal for an Ontology of Manufacturing Domain, Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS’06).

(Martin and D’Acunto, 2003) P. Martin and A. D’Acunto. Design of a production system: an application of integration product-process. Int. J. Computer Integrated Manufacturing, 16(7-8):509-516, 2003.

MASON. Ontology public source code, 2005. http://sourceforge.net/projects/mason-onto.

(Uschold, 1996) M. Uschold. Building ontologies: Towards a unified methodology. In 16th Annual Conf. of the British Computer Society Specialist Group on Expert Systems, Cambridge, UK, 1996.

Uschold, 1996) M. Uschold. Building ontologies: Towards a unified methodology. In 16th Annual Conf. of the British Computer Society Specialist Group on Expert Systems, Cam-bridge, UK, 1996.

(Uschold et al., 1998) Uschold, M., King, M., Moralee, S., and Zorgios, Y. 1998. The Enterprise Ontology. Knowl. Eng. Rev. 13, 1 (Mar. 1998), 31-89. DOI=http://dx.doi.org/10.1017/S0269888998001088


Automation and information systems in the steel industry have become very complex in recent years, driven by significantly increased demands on product quality, production cost optimization, environmental performance and lead time. Furthermore, greater production flexibility is necessary to handle challenges coming from the market, such as changing raw-material costs or short-term customer orders. Existing automation and information systems have reached their limits because they still work in the same way as 30 years ago. The objective of this project is to develop a completely new paradigm for steel-specific, factory- and company-wide automation and information technologies.

The new system is built on a holonic description of the production chain in which software agents interact under a negotiation protocol to solve complex problems. The production environment is formalized using semantic technologies (ontologies), which give the agents a unified description of the information they have to process.
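As an illustration of such a negotiation protocol, a contract-net pattern can be used: a requesting agent announces a task, candidate agents submit cost bids, and the task is awarded to the best bidder. The following is a minimal, self-contained Python sketch of this idea; all names (Agent, Task, the capability strings) are illustrative assumptions, not part of the I2MSteel project code.

```python
# Minimal contract-net-style negotiation sketch (illustrative only;
# agent names, capabilities and costs are hypothetical).

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    required_capability: str

class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # maps capability -> cost bid

    def bid(self, task):
        """Return a cost bid if this agent can perform the task, else None."""
        return self.capabilities.get(task.required_capability)

def negotiate(task, agents):
    """Announce the task to all agents and award it to the lowest bidder."""
    bids = [(agent.bid(task), agent) for agent in agents]
    valid = [(cost, agent) for cost, agent in bids if cost is not None]
    if not valid:
        return None  # no agent can perform the task
    cost, winner = min(valid, key=lambda pair: pair[0])
    return winner.name, cost

agents = [
    Agent("HotMill-1", {"rolling": 12.0}),
    Agent("HotMill-2", {"rolling": 9.5}),
    Agent("Caster-1", {"casting": 7.0}),
]
print(negotiate(Task("roll coil C42", "rolling"), agents))  # ('HotMill-2', 9.5)
```

In a full agent system the announcement and bidding would be asynchronous messages, and the bid evaluation would use the shared ontology to interpret task requirements; the sketch only shows the award logic.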

The global system is designed on a service-oriented architecture, which provides the flexibility the agents need to interact with legacy applications.

Among the potential use cases presented, two are detailed for implementation: one at the hot strip mill, searching for alternative thicknesses for coils during production, and one searching for allocations for unallocated products within the perimeter of the Sagunto plant. The second use case is the one implemented as a prototype linked to the production information system of the Sagunto plant.
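The reallocation idea can be viewed as a compatibility search: an unallocated coil is matched against open orders whose specification windows it satisfies. The sketch below illustrates this under assumed field names (grade, thickness and width ranges); the actual Sagunto data model and matching rules are richer than this.

```python
# Illustrative reallocation sketch: match an unallocated coil against open
# orders whose specification windows it fits. All field names and tolerance
# values are hypothetical, not the plant's actual data model.

from dataclasses import dataclass

@dataclass
class Coil:
    grade: str
    thickness_mm: float
    width_mm: float

@dataclass
class Order:
    order_id: str
    grade: str
    thickness_range: tuple  # (min_mm, max_mm)
    width_range: tuple      # (min_mm, max_mm)

def compatible_orders(coil, orders):
    """Return the open orders whose specification windows the coil satisfies."""
    return [
        o for o in orders
        if o.grade == coil.grade
        and o.thickness_range[0] <= coil.thickness_mm <= o.thickness_range[1]
        and o.width_range[0] <= coil.width_mm <= o.width_range[1]
    ]

coil = Coil(grade="S235", thickness_mm=2.0, width_mm=1250.0)
orders = [
    Order("A1", "S235", (1.8, 2.2), (1200.0, 1300.0)),
    Order("A2", "S355", (1.8, 2.2), (1200.0, 1300.0)),  # wrong grade
    Order("A3", "S235", (2.5, 3.0), (1200.0, 1300.0)),  # too thick a window
]
print([o.order_id for o in compatible_orders(coil, orders)])  # ['A1']
```

When several orders are compatible, a ranking step (for instance by due date or margin) would select the best allocation; that step is omitted here.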

ISBN 978-92-79-65607-1

doi:10.2777/25469

KI-NA-28-453-EN-N