A model of massively parallel call and service processing in telecommunications



Journal of Systems Architecture 43 (1997) 479-490


Vjekoslav Sinković, Ignac Lovrek *

Department of Telecommunications, Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, HR-10000 Zagreb, Croatia

* Corresponding author. Email: [email protected].

Abstract

In this paper telecommunication networks are considered as parallel call and service processing systems where the parallelism can be expressed as a simultaneous handling of calls and services and as an inherent parallelism within the call and service processes. A model of massively parallel processing proposed in this paper combines computational and stochastic characteristics of these processes. A granularity based on the hierarchy call - processing request - elementary task determines their computational properties. Stochastics is described by using probability functions for the request arrival characteristics and the number of elementary tasks per request. The response time, as a basic parameter describing the model, is evaluated using a simulation method which includes a genetic algorithm.

Keywords: Telecommunications; Parallel processing; Response time analysis; Genetic algorithm

1. Introduction

Telecommunication networks can be considered as parallel call and service processing systems. The call is a generic term related to the establishment,

utilisation and release of a connection between a calling party and the called party for the purpose of

exchanging information. The call represents an association between two or more users, or between a user and a network, that is established by use of network

capabilities. The service is that which is offered by a network to its users in order to satisfy a specific telecommunication requirement.

Even when reduced to a single node with user

access and information switching functions, the problem of parallelism is very hard because of the great number of users (several tens of thousands per node) and the great number of requested calls and services (several hundred thousand per hour), as well as the simultaneity and stochastic nature of call and service




processes. The parallel processing in such circumstances can be expressed as a simultaneous handling of different, mutually independent or dependent, calls and services and as an inherent parallelism within the calls and services. The first aspect was widely used as a development basis for an entire range of processors and telecommunication programming languages. The potential of the second one has become significant recently. The number of services and their complexity are growing, especially in intelligent and broadband networks, and new solutions for processing as well as programming have to be evaluated. An approach based on a fine granularity of call and service processes is discussed in the paper. The central idea is to combine stochastic and computational characteristics of these processes to derive a model of the massively parallel processing [11]. A simulation method is proposed in order to analyse the model and the response time as its basic parameter. The simulation is based on a random generation of processing requests with random structures with respect to the number of elementary tasks and their serial/parallel ordering. The genetic algorithm is used to determine a finishing time for the generated requests.

The paper is organised as follows. Section 2 describes a basic call and service processing model. A granular decomposition of calls and services is addressed in Section 3; a model of parallel processing follows in Section 4. An analysis of massively parallel processing of calls and services is given in Section 5. Experiments and simulation results describing the genetic algorithm and the whole method are presented in Section 6, while the concluding remarks appear in Section 7.

2. Basic model of call and service processing

The basic model of call and service processing in telecommunications includes a processing system and its environment consisting of users and network resources. Users initiate calls and services. Each call and service causes a number of processing requests. The processing system dedicated to call and service handling acts in discrete time intervals, starting with a detection of input data from the environment, processing these data, and completing a job by sending back output data which are used for controlling call and service processes. The whole system is defined as reactive.

The call can be represented by an ordered sequence of processing requests R_i occurring in random intervals. The first request R_1 initiates the call; further requests start different communication and information operations called call phases. For instance, a basic call model for an intelligent network comprises 10 phases for a successful originating call and 7 phases for the terminating one [4]. The call and service decomposition into phases depends on the interaction with the environment, which always occurs between two phases. The mean number of requests is influenced by network quality and the implemented set of services. On this level of abstraction, unsuccessful calls shorten the basic model because some phases are skipped. Services may shorten the call or modify it by replacing some phases with other ones, or enlarge it with inserted phases - so the number of requests varies from case to case. In order to analyse calls and services, requests will be decomposed into elementary tasks (ET). Each request R_i consists of n_i elementary tasks. A request can finish with less than n_i elementary tasks, because a degraded task pattern may be executed for an unsuccessful or irregular phase.

The processing system will be observed in discrete time intervals expressed in Δt units. All elementary tasks are supposed to have the same processing time, Δt. When handling a request, a single processor will process an ET in every Δt. The elementary queuing model of a single processor system with a processor queue connected in a feedback loop is shown in Fig. 1 [10].


Fig. 1. Queuing model of a single processor system (request arrival F(t); request duration n, G(n); processor with a feedback queue).

The stochastic nature of the call and service request flow is defined by using two probability functions, F(t) and G(n). The probability function F(t), where t is a discrete time variable, defines the request interarrival time with an arrival intensity λ (requests per Δt). The number of ETs per request is treated as a random variable and described with a probability function G(n) having a mean value n̄. The mean flow of elementary tasks at the input of the processing system equals λ × n̄ ET/Δt. The processing is organised in such a way that after an ET is completed the request is turned back into the queue, where it competes with newly arrived requests for a new Δt needed for the next ET. The processing of a request composed of n ETs will be finished in a Δt with the probability G(n); it is returned into the queue with the probability 1 - G(n).
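As an illustration of this feedback organisation, the sketch below (not from the paper; a minimal discrete-time simulation in Python, with illustrative parameter values and a discretised exponential as a placeholder for G(n)) processes one ET per Δt and returns unfinished requests to the queue.

```python
import random
from collections import deque

random.seed(1)

SLOTS = 10_000    # number of delta-t intervals to simulate
LAMBDA = 0.08     # request arrival intensity (requests per delta-t), illustrative
MEAN_ETS = 10     # mean number of elementary tasks per request (n bar), illustrative

queue = deque()
response_times = []

for t in range(SLOTS):
    # A new request arrives in this delta-t with probability LAMBDA and carries
    # a random number of ETs (discretised exponential as a placeholder for G(n)).
    if random.random() < LAMBDA:
        ets = 1 + int(random.expovariate(1.0 / (MEAN_ETS - 1)))
        queue.append({"remaining": ets, "arrived": t})

    # The single processor executes exactly one ET per delta-t.
    if queue:
        request = queue.popleft()
        request["remaining"] -= 1
        if request["remaining"] == 0:
            response_times.append(t + 1 - request["arrived"])  # request finished
        else:
            queue.append(request)                              # fed back into the queue

print("mean response time (delta-t):",
      sum(response_times) / len(response_times))
```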

A parameter describing processing system performance is the mean response time, T_q, defined as follows:

T_q = T_p + T_w,    (1)

where T_p and T_w represent a mean processing time and a mean waiting time, respectively. All parameters are measured in Δt units. The mean request processing time equals n̄. The request waiting time, T_w, is the time spent in the queue before all ETs are executed.

Looking at a single source, the request flow can be treated as a Poisson on-off process with an interarrival time t_a following an exponential distribution with a mean value T_a and arrival intensity λ = 1/T_a. In that way F(t) is defined. For G(n) the Erlangian distribution of the number of ETs per request with a mean value n̄ is supposed. This distribution offers a parameter additional to the mean


value for ET number definition - a distribution stage, k, useful for the description of different profiles and various usage of services. An adaptation to the real request structure can be done by defining the probability density function parameters for the request interarrival time and the parameters of the Erlangian distribution for the number of elementary tasks per request. Let us suppose a situation where a new service is implemented in a network. The service complexity may be reflected in two ways: in more intensive user-system interaction causing more processing requests and/or in more complex processing requirements introducing more ETs per request. To describe such changes, the probability of requests with a greater number of ETs must be increased as well as the mean value of ETs. The solution is to switch to an Erlangian distribution of a lower stage.

The probability for the number of requests in the j-th Δt, r_t(j), is defined as follows:

P{r_t(j) = r} = (λ^r / r!) e^{-λ}.    (2)

The Erlangian distribution for the number of ETs per request, n_j, equals:

G(n) = P{n_j ≤ n} = 1 - e^{-kμn} Σ_{i=0}^{k-1} (kμn)^i / i!,    (3)

where μ = 1/n̄. The expressions (2) and (3) are used for the request flow simulation.

d’) 25t - .

Fig. 2. Example of a request flow simulation

1od Fig. 3. Request flow characteristics derived with a composite

Poisson process.

Diagrams in Fig. 2 show the results for approximately 2000 samples with the request arrival intensity λ = 0.1, using the Erlangian distribution E_5 (k = 5) and n̄ = 10 for the number of ETs. Probability density functions for the interarrival time (IAT) and the number of ETs (n) are included.

With more sources, N, a composite Poisson process is obtained with the mean value equal to the sum of all individual mean values. For a large N the process is Gauss-like. Fig. 3 represents such an example. The diagram shows the probability of a number of requests in a Δt related to the request arrival intensity. To obtain the mean flow of elementary tasks in a Δt, the number of requests has to be multiplied by n̄.
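The request flow of Fig. 2 can be reproduced along these lines; the sketch below is a hedged Python illustration (not the authors' Mathematica code), assuming a Poisson number of requests per Δt as in (2) and an Erlang-k ET count as in (3), with the parameters quoted above (λ = 0.1, k = 5, n̄ = 10).

```python
import numpy as np

rng = np.random.default_rng(0)

LAMBDA = 0.1   # request arrival intensity per delta-t
K = 5          # Erlang stage
N_MEAN = 10    # mean number of ETs per request
SLOTS = 2000   # number of delta-t intervals

# Expression (2): number of requests in each delta-t ~ Poisson(lambda).
requests_per_slot = rng.poisson(LAMBDA, size=SLOTS)

# Expression (3): number of ETs per request ~ Erlang(k) with mean n bar,
# i.e. a sum of k exponentials with mean n bar / k, rounded to at least 1 ET.
total_requests = int(requests_per_slot.sum())
ets_per_request = np.maximum(
    1, np.rint(rng.gamma(shape=K, scale=N_MEAN / K, size=total_requests))
).astype(int)

print("requests generated:", total_requests)
print("mean ETs per request:", ets_per_request.mean())
print("mean ET flow per delta-t:", ets_per_request.sum() / SLOTS)  # ~ lambda * n bar
```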

3. Granularity of call and service processes

This section introduces a hierarchy: call - call phase - elementary task. Processing requests start phases composed of elementary tasks. The elemen- tary task which represents the basic call and service process grain is defined as a specified set of instruc- tions with a total ordering relation and only one


execution sequence. When discussing the granular decomposition of call and service processes, the following features have to be taken into consideration:

- calls and services are real-time processes with response time constraints and real-time dependencies;

- calls and services are distributed processes because telecommunication systems are built from communicating nodes operating autonomously and being weakly coupled;

- calls and services are processes with a potentially large number of parallel activities.

A new model leading to the massively parallel call and service processing must incorporate both the request level and the elementary task level parallel processing. The request level parallelism can increase the call processing rate, but only the elementary task level parallelism may decrease the request processing time. The problems arising with this model are:

- the request structure expressed by means of a serial/parallel ordering of elementary tasks;

- the minimum request processing time and the corresponding minimum parallelism of elementary tasks;

- the different levels of parallelism within a request and in subsequent requests, and how to exploit them in order to use the processing system more efficiently.

There are three parameters describing call and service granularity at the elementary task level:

n_i  number of ETs in the request R_i;
p_i  maximum number of ETs which can be processed in parallel;
m_i  number of ET partitions S_k which must be processed serially.

n_i, p_i and m_i result from the fine decomposition of call and service processes. The request R_i is described with a triple:

R_i → (n_i, p_i, m_i).    (4)

The internal structure of the request is defined by the number of ETs, s_k, in each partition S_k as follows:

R_i → (s_1, s_2, ..., s_k, ..., s_{m_i}),    (5)

where 1 ≤ s_k ≤ p_i. The index k defines the height of the partition. The serial/parallel ordering of ETs defines a task pattern for a specific request. If m_i = n_i and p_i = 1, elementary tasks are processed serially; for m_i = 1 and p_i = n_i all tasks could be executed simultaneously. All other cases define a partial parallelism for a given set of tasks. The total request processing time, m_i × Δt, is determined by the inherent parallelism and it cannot be decreased. Note that the task pattern graphic representation corresponds to a Gantt chart. The following task pattern example is shown in Fig. 4:

R_i → (n_i, p_i, m_i) = (10, 3, 6),

R_i → (s_1, s_2, s_3, s_4, s_5, s_6) = (1, 3, 3, 1, 1, 1).

The problem of determining the optimum task pattern, i.e. the task pattern with the minimum p_i, is equivalent to the problem of determining the minimum number of processors required to process the program in the shortest possible time. This pattern can be derived by using methods such as those referenced in [5,9].
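As a small worked illustration of Eqs. (4) and (5) (a sketch, not part of the paper), the triple (n_i, p_i, m_i) can be read off a partition list directly; the example reproduces the Fig. 4 pattern.

```python
def describe_task_pattern(partitions):
    """Return the triple (n, p, m) of Eq. (4) for a partition list as in Eq. (5)."""
    n = sum(partitions)   # total number of ETs
    p = max(partitions)   # maximum parallelism
    m = len(partitions)   # number of serial partitions = processing time in delta-t
    return n, p, m

# Task pattern of Fig. 4: (s1, ..., s6) = (1, 3, 3, 1, 1, 1)
print(describe_task_pattern([1, 3, 3, 1, 1, 1]))   # -> (10, 3, 6)
```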

Fig. 4. Task pattern (n_i = 10, p_i = 3, m_i = 6).

Basically, smaller elementary tasks lead to higher parallelism, but the proper boundary depends on the way a concurrent programming language


supports the creation and destruction of parallel activities as well as interprocess communication. It is well known that telecommunication applications like telephony are programmed as if they were composed of a large number of small agents. Programming reasons speak against unconstrained parallel processes, i.e. parallel activities at the instruction level. The fact is that the elementary task should be considered as an individual process which consists of several sequential activities. It describes a sequential behaviour but it is allowed to be executed concurrently [1-3]. The complexity of software modules in widely used non-concurrent languages offers arguments for producing the optimum task patterns with heuristics.

4. Model of parallel processing

The basic model discussed in Section 2 will be transformed into a multiprocessor system with P processing elements (PE), each of them being supplied with an input queue to amortise stochastic variations of the request flow. The processing schemes with application specific and universal processing elements will be considered.

4.1. Case A - Processing scheme with application specific processing elements

The processing scheme consists of processing elements which are capable of processing a specific ET in a Δt. An elementary parallel structure consisting of as many weakly coupled PEs as there are ETs for the whole call and all services is defined. The elementary parallel structure is represented by means of a PE chain which corresponds to the chain of task patterns, as well as elementary tasks, for the defined parallel model of the call and services (Fig. 5). The shadowed part in Fig. 5 corresponds to the task pattern shown in Fig. 4.

Fig. 5. Elementary parallel structure (number of PEs = n, chain length m).

The total number of PEs, n, the length of the processor chain, m, and the maximum parallelism, p, are defined as follows:

n = Σ n_i,    m = Σ m_i,    p = max(p_i).

This scheme corresponds to the perfect schedule of serially connected requests represented by optimum task patterns. The elementary parallel structure can serve m calls in different phases if they differ strictly by 1 ET, which is equivalent to a static data flow principle. A mean parallelism of ETs, u, and a density of ETs, d, are defined as follows:

u = n / m,    (6)

d = n / (m × p) = u / p.    (7)

Let us consider, for example, a set of calls and services decomposed into R = 50 phases (requests) and n = 500 ETs. Supposing u = 1.1 and p = 4 as a degree of concurrency, the resulting elementary parallel structure has the chain length m = 455 and the density d = 0.275. The mean number of elementary tasks per request is n̄ = 10 and the mean relative request processing time is m̄ = 9.1. The processing speed equals 5.5 requests/Δt.
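The numbers of this example follow from Eqs. (6) and (7); the short sketch below (illustrative only, with the chain length rounded to an integer, which is an added assumption) reproduces them.

```python
R = 50     # number of requests (phases)
N = 500    # total number of ETs
U = 1.1    # mean parallelism of ETs, Eq. (6)
P = 4      # maximum parallelism (degree of concurrency)

m = round(N / U)        # chain length, from u = n / m        -> 455
d = N / (m * P)         # density of ETs, Eq. (7)             -> ~0.275
n_mean = N / R          # mean number of ETs per request      -> 10
m_mean = m / R          # mean relative processing time       -> 9.1
speed = R / m_mean      # requests completed per delta-t      -> ~5.5

print(m, round(d, 3), n_mean, m_mean, round(speed, 2))
```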

4.2. Case B - Processing scheme with uniform processing elements

The processing scheme consists of processing elements capable of processing any ET. The elementary scheme is made of p PEs executing 1 to p ETs simultaneously. The resulting scheduling for the sequence of task patterns will have the same "shape" as in Case A (read m_i as the time interval x and p_y as the processor y on m-p co-ordinates). For simultaneous handling of m calls, m' = n/p elementary schemes are needed. The number of processors is the same as in Case A.

In both cases the massively parallel structures consist of multiple elementary structures. The multiplicity factor is the result of a dimensioning procedure. The mutual interconnecting of elementary structures makes the usage of PEs more effective.

5. Analysis of massively parallel processing of calls and services

The analysis is restricted to the problem of the response time of the parallel call and service processing. The requests arriving from the environment randomly within a Δt interval are sent to execution in equidistant Δt intervals, accompanied by their task patterns. The request flow characteristics and task pattern shapes, as well as their compatibility with the processor scheme, will influence the overall functionality. The compatibility can be expressed by the relation between the number of elementary tasks ready for execution in an interval and the number of processing elements available to handle them. The processing schemes described in Section 4 are equivalent with respect to this criterion.

r, (1) ‘I (2) r,(3) r, (4) rl 0) I I I I I I I I I I I I I,

1234567 6 9 10 1 , I

e-1 - e(l)=4 e (2)= 2 e (3)= 0 e 0)

Fig. 6. Simulation time scale.

The processing time, T_p, and the waiting time, T_w, have to be evaluated in order to obtain the response time, T_q. The problem is similar to the problem of multiprocessor scheduling - T_p corresponds to the finishing time for a given set of ETs and T_w to the time needed to complete previous sets of ETs. A simulation method which includes a genetic algorithm for finishing time determination is developed. The simulation method includes three steps: request flow generation, finishing time determination and response time calculation, which are repeated as many times as load samples are defined.

5.1. Request flow generation

A time scale with Δt intervals is assumed. Intervals differ with respect to the number of requests - full intervals have at least one request, empty intervals have no request (Fig. 6). The set of requests generated in a full interval represents a load sample. The number of requests in the sample is a random variable, r_t(j). Subsequent empty intervals form the empty period whose length, denoted with e(j), corresponds to the probability value P{r_t = 0}.

The request flow generation follows distributions (2) and (3). Task patterns differing with respect to Eq. (4) and Eq. (5) are generated randomly, with n_s, m_s and p_s as upper limits for n_i, m_i and p_i. The value of the arrival intensity λ as well as n_s, m_s and p_s are adjustable simulation parameters. By summing task partitions at equal positions, a composite task pattern R_c is obtained:

R ‘ + ( sc,,scz ,..., sck ,..., SC,,),

R,. -+ (n, rp,,m,.)l


where sc_k is the sum of the partitions at position k over all r_t(j) task patterns in the sample, and

n_c = Σ_{i=1..r_t(j)} n_i,    p_c = max(sc_k),    m_c = max(m_i).

One should notice that the composite task pattern introduces a dependence into the set of task patterns. An ET from the partition sc_k cannot be started before finishing all ETs from sc_{k-1}, including ETs from different task patterns which are by definition independent.
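A composite task pattern can be built exactly as described: partitions of the individual patterns are summed at equal positions and the triple (n_c, p_c, m_c) is read off the result. The following is a minimal sketch (not the authors' code); the example combines the Fig. 4 pattern with a shorter hypothetical one.

```python
from itertools import zip_longest

def composite_pattern(patterns):
    """Sum partitions at equal positions and return (sc, (n_c, p_c, m_c))."""
    sc = [sum(column) for column in zip_longest(*patterns, fillvalue=0)]
    n_c = sum(sc)   # equals the sum of all n_i
    p_c = max(sc)   # maximum composite parallelism
    m_c = len(sc)   # equals the maximum of all m_i
    return sc, (n_c, p_c, m_c)

# Two task patterns of one load sample: the Fig. 4 pattern and a shorter one.
print(composite_pattern([[1, 3, 3, 1, 1, 1], [2, 1, 2]]))
# -> ([3, 4, 5, 1, 1, 1], (15, 5, 6))
```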

The regularity of task patterns or their complementary structures can be used to improve the capabilities of the parallel structures. Let us suppose two task patterns with the same structure and a regular high (s_h) - low (s_l) parallelism periodicity [8]. The number of ETs to be executed in parallel is p_c = 2s_h because partitions with high parallelism from both patterns coincide. This number will be reduced to s = s_h + s_l when delaying the ETs of one task pattern for 1 Δt, because partitions with high and low parallelism from different task patterns will be combined in the same interval.
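The effect of delaying one of two identical high-low patterns by 1 Δt can be checked with a few lines (a sketch with invented values s_h = 4, s_l = 1):

```python
def composite_with_shifts(patterns, shifts):
    """Sum partitions at equal positions after delaying each pattern by whole delta-t steps."""
    length = max(len(p) + s for p, s in zip(patterns, shifts))
    sc = [0] * length
    for pattern, shift in zip(patterns, shifts):
        for k, s in enumerate(pattern):
            sc[k + shift] += s
    return sc

s_h, s_l = 4, 1
pattern = [s_h, s_l, s_h, s_l]   # regular high-low parallelism periodicity
print(max(composite_with_shifts([pattern, pattern], [0, 0])))   # 2 * s_h = 8
print(max(composite_with_shifts([pattern, pattern], [0, 1])))   # s_h + s_l = 5
```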

5.2. Finishing time determination

A genetic algorithm is applied for finishing time determination. Genetic algorithms are random search methods based on the natural selection principle [6]. They comprise a population representing potential solutions and genetic operations over the population. Genetic algorithms operate in such a way that each element of the population, called a string, is evaluated, the best strings are selected and a new population is created by using genetic operators. The method proposed in this paper follows the basic principles of a genetic algorithm for multiprocessor scheduling described in [7]. It includes the following steps:

1. Definition.
1.1. The definition of the processor scheme - the number of processors, P, must be defined.
1.2. The definition of the fitness function - the finishing time is used for evaluating different strings in the population.
1.3. The determination of the probabilities controlling genetic operations - probabilities for mutation and crossover operations must be defined.
2. Initial population generation. ETs from the composite task pattern are allocated randomly to processors with precedence relations preserved. The allocation is represented with lists of ETs, one for each processor. A single allocation corresponds to a string as a member of the initial population (Fig. 7). The population size determining the number of strings must be defined.

3. Fitness function evaluation. The evaluation is based on the difference between the highest calculated finishing time and the finishing time for the string under consideration.

Fig. 7. String population.

4. Genetic operations.
4.1. The reproduction - a selection of strings in the current generation is based on the fitness function. Strings with a greater fitness value have a greater possibility to enter the new generation. A roulette wheel where each string occupies a slot size proportional to its fitness value is used. Additionally, the best strings in the current generation are included in the next one.

4.2. The crossover - an exchange of portions of two strings to obtain the new one is done with a predefined probability. The ETs next to the crossover site must be in different partitions, i.e. with different heights.

4.3. The mutation - an exchange of ETs of two strings from the same partition to obtain the new string is done with a predefined probability. The height of the partition to be mutated is chosen randomly. The mutation differs from the one described in [7] because more ETs, and not only one, may migrate from one processor to another. Such a redistribution of ETs between two processors may improve the population and establish load balance because all elementary tasks have the same duration.

5. Convergence testing. If the algorithm is not converging, repeat steps 2-4 and perform genetic operations on a new generation. Otherwise, the resulting strings and finishing time are obtained. The convergence criterion is reached when the finishing time value "stabilises". The maximum number of generations is controlled additionally.
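A compact sketch of such a genetic algorithm is given below. It is not the authors' implementation: the string is simplified to an assignment of every ET (grouped by partition height) to a processor, elitist selection stands in for the roulette wheel, and the finishing time assumes that a partition may start only after the previous one is complete. Parameter values are illustrative.

```python
import random

random.seed(42)

SC = [3, 4, 5, 1, 1, 1]   # composite task pattern: ETs per partition height (illustrative)
P = 3                     # number of processors
POP_SIZE = 10
GENERATIONS = 200
P_CROSS, P_MUT = 0.8, 0.2

def random_string():
    # string[k] lists the processor assigned to each ET of height k.
    return [[random.randrange(P) for _ in range(s)] for s in SC]

def finishing_time(string):
    # Partition k starts only after partition k-1 is complete, so the
    # schedule length is the sum over partitions of the busiest processor.
    return sum(max(procs.count(p) for p in range(P)) for procs in string)

def crossover(a, b):
    # Exchange whole partitions beyond a random cut point.
    cut = random.randrange(1, len(SC))
    return a[:cut] + [row[:] for row in b[cut:]]

def mutate(string):
    # Reassign the ETs of one randomly chosen partition height.
    k = random.randrange(len(SC))
    string[k] = [random.randrange(P) for _ in range(SC[k])]

population = [random_string() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=finishing_time)
    parents = population[: POP_SIZE // 2]            # elitist selection
    children = []
    while len(parents) + len(children) < POP_SIZE:
        a, b = random.sample(parents, 2)
        child = crossover(a, b) if random.random() < P_CROSS else [row[:] for row in a]
        if random.random() < P_MUT:
            mutate(child)
        children.append(child)
    population = parents + children

print("best finishing time:", min(finishing_time(s) for s in population))
```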

5.3. Response time calculation

Parameters relevant for the analysis of a given sample j are:

finishing time: t_f(j),    (8)

unfinished work: t_u(j) = t_f(j) - e(j) - 1,    (9)

waiting time: t_w(j) = Σ_{x=1}^{j-1} t_u(x),    (10)

response time: t_q(j) = t_f(j) + t_w(j).    (11)

The mean values represent global parameters for the set of SN samples:

T_q = (1/SN) Σ_{j=1}^{SN} t_q(j),    (12)

T_f = (1/SN) Σ_{j=1}^{SN} t_f(j),    (13)

T_w = (1/SN) Σ_{j=1}^{SN} t_w(j).    (14)

6. Experiments

A series of experiments has been done, part of them to evaluate the genetic algorithm itself, such as the influence of the initial string population on the convergence, the relation between the number of processors and the convergence, the convergence criterion itself and the differences coming from crossover and mutation probability variations. The finishing time is taken as a comparison criterion. Some typical examples are chosen for presentation in this paper.

Fig. 8. Example of the analysis with different initial populations (2, 10 and 20 initial strings; finishing time vs. generation).

488 V. Sinkouii. I. Loorrk / Jourd of Sy.swsrem,s Archirrcrure 43 f 1997) 479-490

An example with different initial population sizes is shown in Fig. 8. Simulation parameters include the request arrival intensity, the mean number of ETs and maximum parallelism for task patterns, the number of processors, and the crossover and mutation probabilities. Load samples are described with the generated task patterns, the total number of ETs (sum ETs), the composite task pattern (ETs per time slot), the height and the optimal finishing time.

The analysis is done for three different initial populations (2, 10 and 20 strings) for a fixed number of generations (GEN = 100). The best scheduling and corresponding finishing time (FT) are simulation results. Note that the optimum is not reached with the defined number of generations for the largest population size.

An example showing the influence of the number of processors on the convergence is presented in Fig. 9. The increase of the finishing time with a higher processor number, due to an insufficient number of generations, is evident. The number of processors influences the convergence; more generations are needed to approach the optimum with a larger number of processors. The last example shows that the number of generations cannot be a reliable convergence criterion (Fig. 10).

Fig. 9. Example of the analysis with different numbers of processors (finishing time FT vs. number of processors).

Fig. 10. Example of the analysis with different numbers of generations (finishing time vs. GEN = 20-100).

For the purpose under consideration, it was decided to work with a small population size (from one comparable to the number of processors to one several times larger), a large crossover probability (0.7-0.9), a medium mutation probability (0.15-0.25)

and several hundred generations. This has been verified experimentally to be satisfactory from the point of view of convergence and analysis time.

The second group of experiments is dedicated to the main goal - the analysis of call and service processing. A general presumption is that the number of processors cannot be lower than the maximum parallelism defined for the call and service processes, i.e. all inherent parallelism will be fully exploited. Different situations have to be analysed, from regular to heavy request flow, including request bursts. For instance, let us suppose a regular request flow with several requests in a Δt (r(j) = 3-5), a low mean number of ETs per request (n̄ = 10), low maximum parallelism (p_s = 5) and task pattern density ≤ 0.4. It can be expected that just a few empty intervals after each full interval would suffice to avoid an overload of the processing system. Request bursts or heavy request flow can be described by modifying different simulation parameters, one or more in the same analysis: with more requests (r(j) = 10-20), with more ETs per request (n̄ = 20), with higher maximum parallelism (p_s = 10) or with task pattern density > 0.4. In such cases the processing system will be overloaded if the number of subsequent empty intervals is not similar to the finishing time.

In order to cover different situations and to allow variations of different parameters, the decision was made to introduce a new system parameter - a processor load or processor utilisation. It is defined as follows:

R_0 = λ × n̄ / P.    (15)

Fig. 11. Response time per sample for different processor loads.

Fig. 12. Mean response time for different processor loads.

When considering R_0 as a constant value, different

situations with respect to the request arrival intensity, the mean number of ETs per request and the available processing elements will produce the same response time. In the same way, different processor loads can be obtained by changing one, two or all three parameters. The results obtained by simulation with four different processor loads are shown in Fig. 11. The response time per sample grows when the processor load increases. The system cannot recover and the control becomes unstable at high processor loads. Fig. 12 shows the relation between the mean response time and the processor load.
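Using Eq. (15) as reconstructed above, the invariance can be illustrated with a few parameter combinations (values invented for illustration, not taken from the experiments):

```python
def processor_load(arrival_intensity, mean_ets, processors):
    """Processor load R0 of Eq. (15): mean ET flow per delta-t over the available PEs."""
    return arrival_intensity * mean_ets / processors

# Three different (lambda, n bar, P) combinations giving the same load R0 = 0.5.
for lam, n_mean, p in [(0.1, 10, 2), (0.2, 10, 4), (0.1, 20, 4)]:
    print(lam, n_mean, p, "->", processor_load(lam, n_mean, p))
```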

The simulation method is programmed using Mathematica on Macintosh and SUN Sparc computers.

7. Conclusion

In this paper a model suitable for the analysis of massively parallel call and service processing in telecommunications has been considered. The inherent parallelism of these processes offers a possibility to improve processing efficiency. The proposed model is based on a fine granular decomposition of calls and services. Such a decomposition influences both structural complexity (number of elementary tasks and processing elements) and efficiency (mean and maximal parallelism), as well as the performance of the system. The approach combines the stochastic nature of the processes with their computational properties. A method for the determination of the response time of the model has been proposed. It is


simulation based and it includes random generation of processing requests, a genetic algorithm for processing time determination and response time calculation. The proposed method was tested with different simulation parameters describing the request flow (request arrival intensity, mean number of elementary tasks per request, task pattern structure) and the genetic algorithm (number of processing elements, initial population size, genetic operator probabilities).

References

[1] J. Armstrong and R. Virding, Erlang - An experimental telephony programming language, Proceedings 12th International Switching Symposium, Vol. 3 (Stockholm, 1990) 43-48.
[2] J. Armstrong, R. Virding and M. Williams, Concurrent Programming in ERLANG (Prentice-Hall, Englewood Cliffs, 1993).
[3] B. Däcker, Erlang - A new programming language, Ericsson Review 70(2) (1993).
[4] J.M. Duran and J. Visser, International standards for intelligent networks, IEEE Communications Magazine 30(2) (February 1992) 34-42.
[5] E.B. Fernandez and B. Bussell, Bounds on the number of processors and time for multiprocessor optimal schedules, IEEE Transactions on Computers C-22(8) (August 1973) 745-751.
[6] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning (Addison-Wesley, Reading, 1989).
[7] E.S.H. Hou, N. Ansari and H. Ren, A genetic algorithm for multiprocessor scheduling, IEEE Transactions on Parallel and Distributed Systems 5(2) (February 1994) 113-120.
[8] J.H. Huang and L. Kleinrock, Performance evaluation of dynamic sharing of processors in two-stage parallel processing system, IEEE Transactions on Parallel and Distributed Systems 4(3) (March 1993) 306-317.
[9] C.V. Ramamoorthy, K.M. Chandy and M.J. Gonzalez Jr., Optimal scheduling strategies in a multiprocessor system, IEEE Transactions on Computers C-21(2) (February 1972) 137-146.
[10] V. Sinković, Information Networks (Školska knjiga, Zagreb, 1994) (in Croatian).
[11] V. Sinković and I. Lovrek, An approach to massively parallel call and service processing in telecommunications, Proceedings MPCS'94 Conference on Massively Parallel Computing Systems - The Challenges of General Purpose and Special Purpose Computing (Ischia, 1994) 533-537.

Vjekoslav Sinković received the B.Sc. degree in Electrical Engineering and the M.Sc. and Ph.D. degrees from the University of Zagreb, Croatia, in 1962, 1966 and 1968, respectively. He is working as a professor at the Department of Telecommunications, Faculty of Electrical Engineering and Computing, University of Zagreb. He has published over 100 papers and two books. For the last book, entitled "Information Networks", he received the award of the Croatian Academy of Sciences and Arts for the best book in technical sciences in 1994. His current research interests include performance evaluation of high-speed networks and parallel and distributed systems. Dr. Sinković is a chairman of the Publishing Board of ITA - Information, Telecommunications, Automata Journal. He is a member of IEEE and the Croatian Telecommunications Society.

Ignac Lovrek is a professor at the Department of Telecommunications, Faculty of Electrical Engineering at the University of Zagreb, Croatia. He studied Electrical Engineering at the University of Zagreb, where he received his B.Sc., M.Sc. and Ph.D. degrees in 1970, 1973 and 1981, respectively. His research interests include call and service modelling and processing, Petri nets, concurrent programming and telecommunication system architecture. Dr. Lovrek was a chairman of the Program Committee of ConTEL, the Conference on Telecommunications, from 1992 to 1994. He is a member of IEEE, ACM, the GI Special Interest Group on Petri Nets and the Croatian Telecommunications Society.