

A Model Based on Linguistic 2-tuples for Dealing with Heterogeneous Relationship among Attributes in Multi-expert Decision Making

Bapi Dutta, Student Member, IEEE, Debashree Guha, and Radko Mesiar

Abstract—The classical Bonferroni mean (BM), defined by Bonferroni in 1950, assumes a homogeneous relation among the attributes, i.e., each attribute Ai is related to the rest of the attributes A \ {Ai}, where A = {A1, A2, ..., An} denotes the attribute set. In this paper, we emphasize the importance of having an aggregation operator, which we will refer to as the extended Bonferroni mean (EBM) operator, to capture heterogeneous interrelationships among the attributes. We provide an interpretation of “heterogeneous interrelationship” by assuming that some of the attributes, denoted as Ai, are related to a subset Bi of the set A \ {Ai} and the others have no relation with the remaining attributes. We interpret this operator as computing different aggregated values for a given set of inputs as the interrelationship pattern is changed. We also investigate the behaviour of the proposed EBM aggregation operator. Further, to investigate a multi-attribute group decision making (MAGDM) problem with linguistic information, we analyze the EBM operator in the linguistic 2-tuple environment and develop three new linguistic aggregation operators: the 2-tuple linguistic EBM (2TLEBM), the weighted 2-tuple linguistic EBM (W2TLEBM) and the linguistic weighted 2-tuple linguistic EBM (LW-2TLEBM). A concept of linguistic similarity measure of 2-tuple linguistic information is introduced. Subsequently, a MAGDM technique is developed in which the attributes’ weights are in the form of 2-tuple linguistic information and the experts’ weight information is completely unknown. Finally, a practical example is presented to demonstrate the applicability of our results.

Index Terms—Linguistic 2-tuple, extended Bonferroni mean (EBM), 2-tuple linguistic extended Bonferroni mean (2TLEBM), multi-attribute group decision making (MAGDM)

I. INTRODUCTION

MULTI-ATTRIBUTE group decision making (MAGDM) has been one of the major research fields in the decision sciences over the last few decades [1]–[13]. It is characterized by a set of experts whose aim is to find the most suitable alternative among a finite set of alternatives assessed on a finite set of attributes, both qualitative and quantitative. In the process of decision making, experts provide their judgments/opinions on the alternatives with respect to the attributes. However, in many situations, due to lack or abundance of information, subjective estimation or vagueness, and incomplete knowledge about the complex system, experts’ preferences may not be assessed with both precision and certainty. A more realistic approach is then to use linguistic terms instead of numerical values. In such a scenario, a linguistic computational model is required to capture these linguistic terms within a mathematical framework and to facilitate computation between linguistic terms. Several feasible and effective computational models have been suggested in the literature from different perspectives [14]–[17]. We will focus on the linguistic computational model based on an ordinal scale. This model is also called a symbolic model, which makes direct computations on linguistic labels using the ordinal structure of the linguistic term set [17]–[19]. Among the different symbolic computation models, the 2-tuple linguistic computational model has been found to be highly useful due to its simplicity in computation and its capability to avoid information loss during the aggregation of linguistic labels. Its justification and a deeper discussion can be found in [20]. Thus, over the last decade, MAGDM problems under the 2-tuple linguistic environment appear to be an emerging area of research.

Manuscript received April 15, 2014; revised July 23, 2014 and September 9, 2014; accepted October 29, 2014. The first author gratefully acknowledges the financial support provided by the Council of Scientific and Industrial Research, New Delhi, India (Award No. 09/1023(007)/2011-EMR-I). The third author was supported by the grant APVV-0073-10 and by the European Regional Development Fund in the IT4Innovations Centre of Excellence project (CZ.1.05/1.1.00/02.0070).

Bapi Dutta and Debashree Guha are with the Department of Mathematics, Indian Institute of Technology, 800 013, Patna, India (e-mail: [email protected]; [email protected]).

Radko Mesiar is with the Department of Mathematics, Slovak University of Technology in Bratislava, Radlinskeho 11, 81368 Bratislava, Slovak Republic. He is also with IRAFM, University of Ostrava, 30. dubna 22, 701 03 Ostrava 1, Czech Republic (e-mail: [email protected]).

The aim of this paper is not to cover the whole range of MAGDM problems under the linguistic environment, but merely to address the aggregation step. It is known that aggregation is an important step of a MAGDM problem; in this step each alternative’s overall ratings are computed from the alternative’s linguistic performances under the different attributes by using suitable linguistic aggregation operators. In view of this, various aggregation operators have been proposed over the last several years for aggregating 2-tuple linguistic information. We will provide a brief overview of the existing 2-tuple linguistic aggregation operators in section II, including the motivation of our approach considering heterogeneous relations among the attributes.

The paper is planned as follows. In section III, a short survey of the 2-tuple linguistic model is given, including the concept of linguistic similarity measure. In section IV, we define the proposed operator, which we refer to as the extended Bonferroni mean (EBM), and we also discuss its variety in some special cases. In section V, a 2-tuple linguistic extended Bonferroni mean (2TLEBM) is developed and its special cases are studied. This section also introduces two weighted forms of the 2TLEBM operator: the weighted 2-tuple linguistic extended Bonferroni mean (W2TLEBM) and the linguistic weighted 2-tuple linguistic extended Bonferroni mean (LW-2TLEBM) operator. An approach for solving the MAGDM problem with 2-tuple linguistic information is presented in section VI. To illustrate the working of the proposed MAGDM technique, a site location selection problem is presented in section VII. The results of the problem are also compared with the other existing aggregation operators, while section VIII concludes the discussion.

II. BRIEF OVERVIEW OF 2-TUPLE LINGUISTIC AGGREGATION OPERATORS AND MOTIVATION OF OUR APPROACH

As mentioned in the introduction, several 2-tuple linguistic aggregation operators have been introduced in the literature. Based on the arithmetic mean, Herrera and Martínez [17] developed the 2-tuple averaging operator, the 2-tuple weighted averaging operator and the 2-tuple ordered weighted averaging operator. In [22], Jiang and Fan introduced the 2-tuple weighted geometric operator and the 2-tuple ordered weighted geometric operator. Wei [5] presented a MAGDM method based on the extended 2-tuple linguistic weighted geometric operator and the extended 2-tuple ordered weighted geometric operator. In [6], Wei proposed three new aggregation operators: the generalized 2-tuple weighted average operator, the generalized 2-tuple ordered weighted average operator and the induced 2-tuple generalized ordered weighted average operator. Wan [8] developed several hybrid 2-tuple linguistic aggregation operators, such as the 2-tuple hybrid linguistic weighted average operator and the extended 2-tuple hybrid linguistic weighted average operator. Park et al. [9] defined the 2-tuple linguistic harmonic operator, the 2-tuple linguistic weighted harmonic operator, the 2-tuple linguistic ordered weighted harmonic operator and the 2-tuple linguistic harmonic hybrid operator. In [7], Wei developed some dependent 2-tuple linguistic aggregation operators in which the associated weight depends only on the aggregated 2-tuple linguistic information. Merigó et al. [23] introduced the induced 2-tuple linguistic generalized ordered weighted averaging operator. The common characteristic of all the aforementioned aggregation operators is that they emphasize the importance of each input, but they cannot reflect any kind of interrelationship among the aggregated inputs.

However, in real-life decision making problems there are interrelationships among the attributes of MAGDM problems, and these interrelationships among the attributes have a reflection in the corresponding arguments. Thus, sometimes additional conjunctions need to be taken into account in the aggregation process to model the inherent connection among the aggregated arguments [24]. In view of this, by using the Choquet integral, Yang and Chen [10] developed some new aggregation operators, including the 2-tuple correlated averaging operator, the 2-tuple correlated geometric operator and the generalized 2-tuple correlated averaging operator, in which the correlation between aggregated arguments is measured subjectively by the expert. Based on the power average operator, Xu and Wang [11] developed three new linguistic aggregation operators: the 2-tuple linguistic power average operator, the 2-tuple linguistic weighted power average operator and the 2-tuple linguistic ordered weighted power average operator, which allow the aggregated arguments to support each other in the aggregation process, and on this basis the weight vector of the aggregated arguments is determined. Both the Choquet integral and the power average operator focus on capturing the interrelationship among the inputs by adopting different strategies to generate the weights of the inputs. They do not directly address the various conjunctions among the attributes.

In this respect, the Bonferroni mean (BM) focuses directly on the aggregated arguments to capture the interrelationships among them. BM was first introduced by Bonferroni [25] and was generalized by Yager [26] and other researchers [27]–[30]. Yager [26] interpreted BM as a composition of an “anding” and an “averaging” operator and generalized it by replacing the simple averaging operator with other well-known averaging operators, such as the ordered weighted aggregation operator and the Choquet integral [31]. Beliakov et al. [27] explored the modelling capability of BM and showed that it is capable of modelling any type of mandatory requirement in the aggregation process. Considering the interrelationship among three arguments instead of two, Xia et al. [28] defined the weighted generalized BM and the geometric weighted generalized BM. Zhou and He [29] introduced a normalized weighted form of BM. Combining BM with the geometric mean, Xia et al. [32] developed the geometric BM. To aggregate various types of uncertain information, BM has been further extended to intuitionistic fuzzy and hesitant fuzzy environments [24], [28]–[30], [32]–[34].

BM in its inherent structure assumes that each input is related to the rest of the inputs, i.e., when it is used for aggregating alternatives’ performances under different attributes, it inherently assumes that each attribute is related to the rest of the attributes. However, in real-life situations such a homogeneous connection among the attributes may not always exist. There may arise situations in which some of the attributes are related only to a non-empty subset of the rest of the attributes and the others have no relation with the remaining attributes. This analysis forms the background of our present study, in which we model this kind of heterogeneous connection among the attributes by extending the concept of BM and introducing the extended Bonferroni mean (EBM) operator. We provide the mathematical description of the heterogeneous relation among the attributes in section IV. We also provide an example to illustrate the working of the proposed EBM operator in comparison with the other existing aggregation operators. After introducing the concept of the EBM operator, we analyze the proposed operator in the linguistic 2-tuple environment and, subsequently, a MAGDM technique is developed by assuming heterogeneous interrelationships among the attributes.

III. 2-TUPLE LINGUISTIC COMPUTATIONAL MODELS

A. Brief review of 2-tuple linguistic computational models

Let S = {l0, l1, ..., lh} be a linguistic term set with odd cardinality h + 1. Any term li ∈ S denotes a possible value for a linguistic variable. The following properties should hold for the term set S [17], [21]:

• the set S should be ordered, i.e., li ≥ lj if i ≥ j;
• negation of any linguistic term li ∈ S: neg(li) = lj such that j = h − i;
• the maximum of any two linguistic terms li, lj ∈ S: max(li, lj) = li if li ≥ lj;
• the minimum of any two linguistic terms li, lj ∈ S: min(li, lj) = li if li ≤ lj.

The cardinality of the linguistic term set S must be small enough so as not to impose useless precision on the users, and it should be rich enough to allow discrimination of the performances of each criterion in a limited number of grades [3], [35]. In fact, psychologists recommend the use of 7 ± 2 labels: fewer than 5 is not sufficiently informative, and more than 9 is too many for a proper understanding of their differences [36]. In view of this, a linguistic term set S with seven labels can be defined as follows:

S = {l0 = very low (VL), l1 = low (L), l2 = moderately low (ML), l3 = normal (N), l4 = moderately high (MH), l5 = high (H), l6 = very high (VH)}

Here, we have adopted the 2-tuple linguistic representation model, which was developed by Herrera and Martínez [17], [21] based on the concept of symbolic translation. We recall that a symbolic aggregation operation H on the scale S is any non-decreasing function H : S^n → [0, h] such that H(l0, l0, ..., l0) = 0 and H(lh, lh, ..., lh) = h. As typical symbolic aggregation operations, we recall the mean or the median of label indices.

Definition 1: Let β ∈ [0, h] be the result of a symbolic aggregation operation on the indices of the labels of the linguistic term set S = {l0, l1, ..., lh}. If i = round(β) and α = β − i are two values such that i ∈ {0, 1, ..., h} and α ∈ [−0.5, 0.5), then α is called the symbolic translation.

On the basis of symbolic translation [17], [21], linguistic information is represented by means of a 2-tuple (li, αi), where li ∈ S represents the linguistic label and αi ∈ [−0.5, 0.5) denotes the symbolic translation. In view of Definition 1, there are some important observations regarding the ranges of values of α and β, which are summarized below in the form of a remark.

Remark 1: Observe that if β ∈ [0, 0.5) then i = 0 and α = β ∈ [0, 0.5). On the other hand, if β ∈ [h − 0.5, h] then i = h and α ∈ [−0.5, 0]. Having these constraints in mind, we will still formally consider α ∈ [−0.5, 0.5) for every β ∈ [0, h], to keep the notation as simple as possible. This convention may also be applied to the related scales. The observation indicates that, although we write the domain of (l, α) as S × [−0.5, 0.5), the actual domain of (l, α) is {l0} × [0, 0.5) ∪ {l1, l2, ..., lh−1} × [−0.5, 0.5) ∪ {lh} × [−0.5, 0].

With the above observation in the background, the conversion of a symbolic aggregation result into an equivalent linguistic 2-tuple can be done by using the following function:

Definition 2: Let S = {l0, l1, ..., lh} be a linguistic term set and β ∈ [0, h] be the numerical value obtained from a symbolic aggregation operation on the labels of S. Then the 2-tuple that conveys the equivalent information to β is given by the following function:

$$\Delta : [0, h] \to S \times [-0.5, 0.5), \qquad \Delta(\beta) = (l_i, \alpha)$$

where i = round(β) is the usual rounding operation on the label index, i.e., i is the index of the label closest to β, and α is the value of the symbolic translation given by

$$\alpha = \begin{cases} \beta - i, & \alpha \in [-0.5, 0.5), & \text{if } i \neq 0, h \\ \beta, & \alpha \in [0, 0.5), & \text{if } i = 0 \\ \beta - h, & \alpha \in [-0.5, 0], & \text{if } i = h \end{cases}$$

Example 1: Assume that S = {l0, l1, l2, l3, l4, l5, l6} represents a linguistic term set as described above and β = 2.7 is obtained from a symbolic aggregation operation. Then, from Definition 2, we can convert β = 2.7 into the linguistic 2-tuple ∆(2.7) = (l3, −0.3) = (Normal, −0.3), which is presented in Fig. 1.

Fig. 1. A 2-tuple linguistic representation: the term set {Very Low, Low, Moderately Low, Normal, Moderately High, High, Very High} placed on the unit interval, with the point (Normal, −0.3) marked.

When an expert expresses his/her judgment by a linguistic 2-tuple (l, α), l denotes the nearest linguistic term in the predefined term set S and α represents the expert’s deviation from that linguistic term. For example, suppose a company thinks that the possibility of further extension of a location is “almost very high”. In this scenario, the location’s rating can be quantified by the linguistic 2-tuple (l6, α), i.e., the rating of the location is not exactly l6 but a little less than l6, which can be modeled by α.

Definition 3: Let S = {l0, l1, ..., lh} be a linguistic term set. For any linguistic 2-tuple (li, αi), its equivalent numerical value is obtained by the following function:

$$\Delta^{-1} : S \times [-0.5, 0.5) \to [0, h], \qquad \Delta^{-1}(l_i, \alpha_i) = i + \alpha_i = \beta_i$$

where βi ∈ [0, h].

Example 2: Assume that S = {l0, l1, l2, l3, l4, l5, l6} represents a linguistic term set and (l3, −0.3) is a linguistic 2-tuple. Based on Definition 3, the equivalent numerical value of (l3, −0.3) is ∆⁻¹(l3, −0.3) = 3 + (−0.3) = 2.7.

From Definition 2 and Definition 3, it is noted that any linguistic term can be converted into a linguistic 2-tuple as follows: l ∈ S ⇒ (l, 0).
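To make the ∆ and ∆⁻¹ conversions of Definitions 2 and 3 concrete, the following minimal Python sketch (our own illustrative naming, not part of the original paper) encodes the seven-label term set used in the examples and reproduces the values of Examples 1 and 2.

```python
# Minimal sketch of the 2-tuple conversion functions of Definitions 2 and 3.
# Assumption: the seven-label term set S = {l0, ..., l6} from section III-A.

S = ["very low", "low", "moderately low", "normal",
     "moderately high", "high", "very high"]
H = len(S) - 1  # h = 6

def delta(beta):
    """Delta: convert beta in [0, h] into a linguistic 2-tuple (l_i, alpha)."""
    if not 0 <= beta <= H:
        raise ValueError("beta must lie in [0, h]")
    i = int(beta + 0.5)          # round-half-up keeps alpha in [-0.5, 0.5)
    return S[i], beta - i        # alpha = beta - i is the symbolic translation

def delta_inv(label, alpha):
    """Delta^{-1}: recover the numerical value beta = i + alpha of a 2-tuple."""
    return S.index(label) + alpha

if __name__ == "__main__":
    label, alpha = delta(2.7)
    print(label, round(alpha, 2))        # normal -0.3, as in Example 1
    print(delta_inv("normal", -0.3))     # 2.7, as in Example 2
```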

The ordering of two linguistic 2-tuples (lm, αm) and (ln, αn) can be done according to lexicographic order as follows:
(1) If m > n, then (lm, αm) > (ln, αn).
(2) If m = n, then
    (a) (lm, αm) = (ln, αn) for αm = αn;
    (b) (lm, αm) > (ln, αn) for αm > αn.

B. A new concept of linguistic similarity measure of 2-tuples

In the literature, a similarity measure between linguistic 2-tuples was proposed in [7]. However, the existing similarity measure basically computes the similarity degree as an exact numerical value. As linguistic descriptions are easily understood and interpretable by human beings even when concepts are abstract, a natural question at this stage is whether it is reasonable to represent the similarity between two linguistic terms by a precise value [37]. For example, suppose we want to develop a consensus support system (CSS) for decision making so that it can be useful for a customer who need not be knowledgeable in the computation of linguistic 2-tuples. In the CSS it may be suggested that the experts whose opinions are “moderately similar” are in consensus. The customer can comfortably understand the suggestion; it is not required that he/she know the concept lying behind the computation method of the linguistic term “moderately similar”. In view of this, we are of the opinion that the similarity between two linguistic opinions should be expressed in a linguistic manner. This fact motivates us to define the concept of a linguistic similarity measure between two pieces of linguistic 2-tuple information. Regarding the definition of the linguistic similarity measure, from the axiomatic point of view we are inspired by the idea of Bustince et al. [38]. Namely, the similarity should be symmetric, and the similarity of two identical linguistic 2-tuples should be maximal. On the other hand, the similarity of the most distinct pair, namely of (l0, 0) and (lh, 0), should be minimal. Moreover, our similarity should possess a kind of monotonicity, namely, the similarity of the linguistic 2-tuples (li, αi) and (lk, αk) cannot be larger than the similarity of (li, αi) and (lj, αj) whenever (li, αi) ≤ (lj, αj) ≤ (lk, αk). Among several possible choices, we propose the following rather natural definition of a linguistic similarity measure.

Definition 4: Let S = {l0, l1, ..., lh} be a linguistic term set as defined in section III. Consider a new linguistic term set S′ = {s′0, s′1, ..., s′h} whose term s′i represents the linguistic evaluation of the similarity between any two linguistic terms lp and lr from S such that |p − r| = h − i. Then, the linguistic similarity degree between any two linguistic 2-tuples (lp, αp) and (lr, αr) is given as follows:

$$sim((l_p, \alpha_p), (l_r, \alpha_r)) = \Delta_{S'}\big(h - |\Delta_S^{-1}(l_p, \alpha_p) - \Delta_S^{-1}(l_r, \alpha_r)|\big) \tag{1}$$

Example 3: Assume that

S′ = {s′0 = perfectly dissimilar, s′1 = close to perfectly dissimilar, s′2 = moderately dissimilar, s′3 = medium similar, s′4 = moderately similar, s′5 = close to perfectly similar, s′6 = perfectly similar}

is the linguistic term set used to represent the similarity between two linguistic terms of S = {l0, l1, l2, l3, l4, l5, l6}. The similarity between the two linguistic 2-tuples (l3, 0.3) and (l4, −0.4) is

$$sim((l_3, 0.3), (l_4, -0.4)) = \Delta_{S'}\big(6 - |\Delta^{-1}(l_3, 0.3) - \Delta^{-1}(l_4, -0.4)|\big) = \Delta_{S'}(5.7) = (s'_6, -0.3)$$

The above evaluation clearly indicates that the linguistic similarity degree of (l3, 0.3) and (l4, −0.4) is slightly less than “perfectly similar”.

Based on Definition 4, we can define the similarity between two collections of linguistic 2-tuples in the following way, again inspired by the idea proposed in [38].

Definition 5: Let A = ((l_{j1}, α_{j1}), (l_{j2}, α_{j2}), ..., (l_{jm}, α_{jm})) and B = ((l_{k1}, γ_{k1}), (l_{k2}, γ_{k2}), ..., (l_{km}, γ_{km})) be two collections of linguistic 2-tuples. Then, the linguistic similarity between A and B is defined as follows:

$$sim(A, B) = \Delta_{S'}\Big(\frac{1}{m}\sum_{i=1}^{m}\Delta_{S'}^{-1}\big(sim((l_{j_i}, \alpha_{j_i}), (l_{k_i}, \gamma_{k_i}))\big)\Big) \tag{2}$$

Note that, formally, the similarity of linguistic 2-tuples (or collections of linguistic 2-tuples) can be expressed as a real value from the interval [0, h]. However, the interpretation is then out of the linguistic scope, which we believe to be preferable for the customers, and thus we also prefer to deal with the proposed concept of linguistic similarity.
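A rough Python sketch of the linguistic similarity measure of Definitions 4 and 5 is given below; the helper names are ours, 2-tuples are passed as (label index, α) pairs, and the similarity term set S′ of Example 3 is assumed. It recovers the value computed in Example 3.

```python
# Sketch of the linguistic similarity measure of Eqs. (1) and (2).
# Assumption: S' is the seven-label similarity term set of Example 3,
# and each 2-tuple is given as a (label index, alpha) pair.

S_PRIME = ["perfectly dissimilar", "close to perfectly dissimilar",
           "moderately dissimilar", "medium similar",
           "moderately similar", "close to perfectly similar",
           "perfectly similar"]
H = len(S_PRIME) - 1  # h = 6

def relabel(beta):
    """Delta_{S'}: express a value in [0, h] as a 2-tuple over S'."""
    i = int(beta + 0.5)
    return S_PRIME[i], beta - i

def sim(a, b):
    """Eq. (1): linguistic similarity of two 2-tuples (index, alpha)."""
    return relabel(H - abs((a[0] + a[1]) - (b[0] + b[1])))

def sim_collections(coll_a, coll_b):
    """Eq. (2): average the numerical similarities, then relabel in S'."""
    total = sum(S_PRIME.index(lbl) + alpha
                for lbl, alpha in (sim(a, b) for a, b in zip(coll_a, coll_b)))
    return relabel(total / len(coll_a))

if __name__ == "__main__":
    label, alpha = sim((3, 0.3), (4, -0.4))
    print(label, round(alpha, 2))   # perfectly similar -0.3, as in Example 3
```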

IV. BONFERRONI MEAN AND ITS EXTENSION

In its original form, BM is a mean-type aggregation operator, as analyzed by Yager [26]:

Definition 6: Let (a1, a2, ..., an), n ≥ 2, be a collection of non-negative real values. Assume p ≥ 0 and q ≥ 0. The general BM of the collection (a1, a2, ..., an) is defined as

$$BM^{p,q}(a_1, a_2, ..., a_n) = \Bigg(\frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i \neq j}}^{n} a_i^p a_j^q\Bigg)^{\frac{1}{p+q}} \tag{3}$$

BM has been mainly used in multi-attribute decision making (MADM) to assess the alternatives’ performances under inter-related attributes. Here, our main purpose is to analyze BM in the context of the MADM problem. To this aim, the special case of BM with p = q = 1 is considered. Then (3) becomes:

$$BM^{1,1}(a_1, a_2, ..., a_n) = \Bigg(\frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i \neq j}}^{n} a_i a_j\Bigg)^{\frac{1}{2}} \tag{4}$$

Here, aj denotes the satisfaction degree of the alternative x with respect to the attribute Aj, and the product operation is used to implement an “anding” of attribute satisfactions. With this assumption in the background, Yager [26] has provided an interpretation of BM as computing the average of the satisfaction of each pair of attributes Ai AND Aj. Then, (4) is transformed by Yager [26] into the following equation:

$$BM^{1,1}(a_1, a_2, ..., a_n) = \Bigg(\frac{1}{n}\sum_{i=1}^{n} a_i\bigg(\frac{1}{n-1}\sum_{\substack{j=1 \\ j \neq i}}^{n} a_j\bigg)\Bigg)^{\frac{1}{2}} \tag{5}$$


where $\frac{1}{n-1}\sum_{\substack{j=1\\ j\neq i}}^{n} a_j$ denotes the average satisfaction degree of all the attributes except Ai, and $a_i\big(\frac{1}{n-1}\sum_{\substack{j=1\\ j\neq i}}^{n} a_j\big)$ models the conjunction of the satisfaction of Ai with the average of the satisfaction of the rest of the attributes.
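The following short Python sketch (illustrative only, not the authors' code) implements the classical BM of Eq. (3) together with Yager's rewritten form (5) for p = q = 1, confirming that both yield the same value.

```python
# Sketch of the classical Bonferroni mean, Eq. (3), and Yager's
# rewriting (5) for p = q = 1; both produce the same result.

def bm(values, p=1, q=1):
    """Classical Bonferroni mean BM^{p,q}, Eq. (3)."""
    n = len(values)
    total = sum(a**p * b**q
                for i, a in enumerate(values)
                for j, b in enumerate(values) if i != j)
    return (total / (n * (n - 1))) ** (1.0 / (p + q))

def bm_11_rewritten(values):
    """Yager's form of BM^{1,1}, Eq. (5)."""
    n = len(values)
    inner = sum(a * (sum(values) - a) / (n - 1) for a in values)
    return (inner / n) ** 0.5

if __name__ == "__main__":
    data = [0.5, 0.6, 0.4, 0.7]
    print(round(bm(data), 3))               # 0.546
    print(round(bm_11_rewritten(data), 3))  # 0.546, the same value
```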

The foregoing discussion shows that when BM is used to compute the alternative x’s performance with respect to the attribute Ai, it assumes that each attribute Ai has a relationship with the rest of the attributes A \ {Ai}. This relationship can be depicted as in Fig. 2. However, in real-life MADM problems attributes may not always follow this kind of homogeneous interrelationship pattern. There may be some attributes Ai which are related only to a non-empty subset Bi of the set A \ {Ai}, while others have no relationship with the remaining attributes. So there is a need for an aggregation operator that enables the modelling of this kind of heterogeneous relationship among attributes. This is the aspect that inspired us to extend the concept of BM so that we can model the interrelationship among the attributes in the MADM scenario in a more intuitive manner.

Fig. 2. Interrelationships among attributes, where Ai − Aj represents that Ai is related to Aj (each attribute is related to all the others).

A. Extended Bonferroni mean

Let A = (a1, a2, ..., an) be a collection of inputs related to the attributes A = {A1, A2, ..., An}; the ai’s are non-negative real numbers. The attributes are heterogeneously related to each other. Based on their relationship pattern, they can be classified into two disjoint sets C and D, where each attribute Ai from C is related to a non-empty subset of attributes Bi ⊂ C(⊂ A) \ {Ai} (therefore, attributes from C will be called dependent), while each attribute Aj from D is not related to any other attribute from A \ {Aj} (therefore, attributes from D will be called independent). Let Ii denote the set of indices of the attributes in Bi, let I′ denote the set of indices of the attributes in D, and let the symbol card(I′) denote the cardinality of the set I′. With these assumptions and notations in the background, the EBM of the collection of inputs (a1, a2, ..., an) is defined as follows:

Definition 7: For any p > 0 and q ≥ 0, the EBM aggregation operator of dimension n is a mapping EBM : (R+)^n → R+ such that

$$EBM^{p,q}(a_1, a_2, ..., a_n) = \Bigg(\frac{n - card(I')}{n}\bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} a_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q\Big)\bigg)^{\frac{p}{p+q}} + \frac{card(I')}{n}\bigg(\frac{1}{card(I')}\sum_{i \in I'} a_i^p\bigg)\Bigg)^{\frac{1}{p}} \tag{6}$$

where the empty sum is 0 by convention (i.e., if card(I′) = 0 this concerns the last sum, and if card(I′) = n this concerns the first sum), and we have adopted the convention 0/0 = 0 [39]–[41] (in fact, we only need to define 0/0; its conventional real value is not important here).
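As a concrete reading of Eq. (6), the sketch below (our own function and argument names, not the authors' implementation) computes EBM^{p,q} from a list of inputs and a list `relations`, where `relations[i]` is the index set I_i of the attributes related to attribute i and an empty list marks an independent attribute (i ∈ I′).

```python
# Illustrative sketch of the EBM operator of Eq. (6).
# relations[i] lists the indices in I_i; an empty list means i belongs to I'.

def ebm(values, relations, p=1.0, q=1.0):
    n = len(values)
    dep = [i for i in range(n) if relations[i]]       # dependent attributes
    ind = [i for i in range(n) if not relations[i]]   # independent attributes (I')
    total = 0.0
    if dep:
        inner = sum(values[i]**p *
                    sum(values[j]**q for j in relations[i]) / len(relations[i])
                    for i in dep)
        total += (len(dep) / n) * (inner / len(dep)) ** (p / (p + q))
    if ind:
        total += (len(ind) / n) * (sum(values[i]**p for i in ind) / len(ind))
    return total ** (1.0 / p)
```

With all attributes independent this reduces to the power-root arithmetic mean, and with card(I′) = 0 and I_i = {1, ..., n} \ {i} it recovers BM^{p,q}, mirroring the particular cases discussed below.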

The relationship between attributes is depicted in Fig. 3, where $C = \{A_{h_1}, A_{h_2}, ..., A_{h_u}\}$ represents the set of dependent attributes. The subset $B_{h_1} = \{A_{1h_1}, A_{2h_1}, ..., A_{t_1 h_1}\}$ of C denotes the set of attributes which are related to $A_{h_1}$.

Fig. 3. Heterogeneous interrelationships among attributes, where Ai − Aj represents that Ai is related to Aj. The attributes $\{A_{h_1}, A_{h_2}, ..., A_{h_u}\}$ are dependent and each $A_{h_i} \in C$ is related to a subset $B_{h_i}$ of C; the attributes in D are independent.

We shall first describe how we interpret the EBM operator when modelling heterogeneous relationships among the attributes during the aggregation step of the MADM process. For this purpose, we consider the particular case p = q = 1. Let (a1, a2, ..., an) be the satisfaction degrees of an alternative against the attributes {A1, A2, ..., An}. Then, using the proposed aggregation operator (6), denoted as EBM^{1,1}, we get as our aggregated value:

$$EBM^{1,1}(a_1, a_2, ..., a_n) = \frac{n - card(I')}{n}\bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} a_i\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j\Big)\bigg)^{\frac{1}{2}} + \frac{card(I')}{n}\bigg(\frac{1}{card(I')}\sum_{i \in I'} a_i\bigg) \tag{7}$$

It is important to note here that $\frac{1}{card(I_i)}\sum_{j \in I_i} a_j$ indicates the average satisfaction degree of the subset of attributes $B_i \subset C \setminus \{A_i\}$ which are related to Ai. The expression $a_i\big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j\big)$ then models the conjunction of the satisfaction of the attribute Ai with the average satisfaction of its inter-related attributes Bi. The evaluation of $\big(\frac{1}{n - card(I')}\sum_{i \notin I'} a_i\big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j\big)\big)^{\frac{1}{2}}$ gives the satisfaction of the dependent attributes by taking the average of the evaluations of each statement: “satisfaction of Ai and the average satisfaction of its inter-related attributes Bi”. On the other hand, $\frac{1}{card(I')}\sum_{i \in I'} a_i$ indicates the total satisfaction degree of the independent attributes. Finally, by EBM^{1,1}(a1, a2, ..., an) we compute the average satisfaction degree of the heterogeneously related attributes.

Depending on the nature of the set I′, in the present work, the proposed EBM operator is transformed into the following three particular cases:

1) if card(I′) = n, i.e., if all input arguments are independent, we obtain the power-root arithmetic mean given below, independent of q:

$$EBM^{p,q}(a_1, a_2, ..., a_n) = \Big(\frac{1}{n}\sum_{i=1}^{n} a_i^p\Big)^{\frac{1}{p}}$$

2) if card(I′) = 0 and each input argument is dependent on all other input arguments, the Bonferroni mean BM^{p,q} is recovered as follows:

$$EBM^{p,q}(a_1, a_2, ..., a_n) = \Bigg(\frac{n-0}{n}\bigg(\frac{1}{n-0}\sum_{i=1}^{n} a_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q\Big)\bigg)^{\frac{p}{p+q}} + 0\Bigg)^{\frac{1}{p}} = \Bigg(\frac{1}{n}\sum_{i=1}^{n} a_i^p\bigg(\frac{1}{n-1}\sum_{\substack{j=1\\ j\neq i}}^{n} a_j^q\bigg)\Bigg)^{\frac{1}{p+q}} = \Bigg(\frac{1}{n(n-1)}\sum_{\substack{i,j=1\\ i\neq j}}^{n} a_i^p a_j^q\Bigg)^{\frac{1}{p+q}} = BM^{p,q}(a_1, a_2, ..., a_n) \tag{8}$$

3) if card(I′) = 0 and each input argument is dependent on some other, but not always all, input arguments, then (6) becomes:

$$EBM^{p,q}(a_1, a_2, ..., a_n) = \Bigg(\frac{1}{n}\sum_{i=1}^{n} a_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q\Big)\Bigg)^{\frac{1}{p+q}} \tag{9}$$

From the construction of the EBM operator, one may realize that the aggregated value computed by EBM depends on the interrelationships among the inputs. To support this idea, we present two examples in which we provide the same set of inputs with different interrelationship patterns. Subsequently, we perform the aggregation by using both the EBM and the BM operator.

Example 4: Let us consider the inputs a1 = 0.5, a2 = 0.6, a3 = 0.4 and a4 = 0.7. Assume that input a1 is related to the subset of inputs B1 = {a2, a3, a4}, a2 is related to B2 = {a1, a3}, a3 is related to B3 = {a1, a2} and a4 is related to B4 = {a1}; then I1 = {2, 3, 4}, I2 = {1, 3}, I3 = {1, 2} and I4 = {1}. Clearly, all the inputs are dependent, i.e., card(I′) = 0, and every input is dependent on some other, but not all, inputs. Therefore, we can utilize (9) to compute the aggregated value of the inputs. For the sake of simplicity in computation, we take p = q = 1 in (9) and obtain the aggregated value of the inputs as follows:

$$EBM^{1,1}(0.5, 0.6, 0.4, 0.7) = \Bigg(\frac{1}{4}\Big(0.5\Big(\frac{0.6 + 0.4 + 0.7}{3}\Big) + 0.6\Big(\frac{0.5 + 0.4}{2}\Big) + 0.4\Big(\frac{0.5 + 0.6}{2}\Big) + 0.7(0.5)\Big)\Bigg)^{\frac{1}{2}} = \Big(\frac{1}{4}(0.5 \times 0.57 + 0.6 \times 0.45 + 0.4 \times 0.55 + 0.7 \times 0.5)\Big)^{\frac{1}{2}} \approx 0.530$$

Using the BM operator ((3) with p = q = 1), the aggregated value of the inputs can be calculated as follows:

$$BM^{1,1}(0.5, 0.6, 0.4, 0.7) = \Bigg(\frac{1}{4}\Big(0.5\Big(\frac{0.6 + 0.4 + 0.7}{3}\Big) + 0.6\Big(\frac{0.5 + 0.4 + 0.7}{3}\Big) + 0.4\Big(\frac{0.5 + 0.6 + 0.7}{3}\Big) + 0.7\Big(\frac{0.5 + 0.6 + 0.4}{3}\Big)\Big)\Bigg)^{\frac{1}{2}} = \Big(\frac{1}{4}(0.5 \times 0.57 + 0.6 \times 0.53 + 0.4 \times 0.6 + 0.7 \times 0.5)\Big)^{\frac{1}{2}} = 0.546$$

Example 5: Let us consider the same inputs as in Example 4 with a different relationship among the input arguments. Suppose input a1 is related to {a3}, input a2 is related to {a3, a4}, input a3 is related to {a1, a2} and input a4 is related to {a2}. As there are no independent attributes and every input is dependent on some other, but not all, inputs, we can utilize (9) to aggregate the inputs as follows (for the sake of simplicity, we take p = q = 1 in (9)):

$$EBM^{1,1}(0.5, 0.6, 0.4, 0.7) = \Bigg(\frac{1}{4}\Big(0.5(0.4) + 0.6\Big(\frac{0.7 + 0.4}{2}\Big) + 0.4\Big(\frac{0.5 + 0.6}{2}\Big) + 0.7(0.6)\Big)\Bigg)^{\frac{1}{2}} = \Big(\frac{1}{4}(0.5 \times 0.4 + 0.6 \times 0.55 + 0.4 \times 0.55 + 0.7 \times 0.6)\Big)^{\frac{1}{2}} = 0.541$$

The aggregated value obtained by the BM operator ((3) with p = q = 1) is:

$$BM^{1,1}(0.5, 0.6, 0.4, 0.7) = 0.546$$
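The two aggregated values above can be checked numerically; the snippet below (ours, using the special case (9) with p = q = 1) recomputes them.

```python
# Quick numerical check of Examples 4 and 5 via Eq. (9) with p = q = 1.
a = [0.5, 0.6, 0.4, 0.7]
rel_ex4 = [[1, 2, 3], [0, 2], [0, 1], [0]]   # I_1, ..., I_4 of Example 4 (0-based)
rel_ex5 = [[2], [2, 3], [0, 1], [1]]         # relationship pattern of Example 5

def ebm11(values, relations):
    inner = sum(v * sum(values[j] for j in rel) / len(rel)
                for v, rel in zip(values, relations))
    return (inner / len(values)) ** 0.5

print(round(ebm11(a, rel_ex4), 3))  # 0.53  (Example 4)
print(round(ebm11(a, rel_ex5), 3))  # 0.541 (Example 5)
```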

In Examples 4 and 5, the same set of inputs is considered, although the interrelationships among the input arguments are different. Obviously, BM results in the same aggregated value in both cases. Hence, the obtained result is not satisfactory. In this respect, there is a need to develop a more general aggregation operator. By EBM, we get different aggregated values for the given inputs. This observation clearly indicates that our approach of developing a new aggregation operator does, in fact, model the interrelationship pattern among the inputs more intuitively.

TABLE I
SATISFACTION OF LOCATIONS AGAINST MULTIPLE ATTRIBUTES

Attribute                                 Location1 (L1)   Location2 (L2)
Labour characteristic (A1)                0.3              0
Availability of raw materials (A2)        0.45             0.2
Possibility of further extension (A3)     0                0.3
Political scenario (A4)                   0.5              0.4

In order to explore the modeling capability of the proposed EBM operator in comparison with some well-known aggregation operators, including BM, we provide another example. Consider Table I, showing two locations with their satisfaction against four attributes evaluated by a multinational company. The company finds that the attributes have the following relationships: A1, A2, A4 are influenced by A3, i.e., A3 is inter-related with A1, A2 and A4; however, A1, A2, A4 have no interrelationship among themselves. If the company employs BM for aggregating the locations’ total satisfaction, L1 with total satisfaction 0.2915 (obtained by using Eq. (4) from L1’s individual satisfactions under the different attributes, i.e., (0.3, 0.45, 0, 0.5) as provided in Table I) would be more suitable, even though the satisfaction degrees of the inter-related pairs of attributes (A1A3, A2A3, A4A3) (i.e., (0.3 × 0, 0.45 × 0, 0.5 × 0)) are all zero. With the proposed operator, we can enforce that the satisfaction degrees of at least one pair among the inter-related pairs of attributes Ai and Aj must be above zero for a non-zero output. Thus, by using the proposed operator, we can model the exact relationship among the attributes. Before computing the total satisfactions of L1 and L2 by the proposed operator, we describe the relationship among the attributes more specifically as follows: A1 is related to {A3}, A2 is related to {A3}, A3 is related to {A1, A2, A4} and A4 is related to {A3}. Now, by employing the proposed EBM operator and the individual satisfaction values for the locations L1 (i.e., (0.3, 0.45, 0, 0.5) as provided in Table I) and L2 (i.e., (0, 0.2, 0.3, 0.4) as provided in Table I), the total satisfactions of L1 and L2 are obtained as 0 and 0.245, respectively. Therefore, by using the proposed operator, we can capture the exact relationship among the attributes and can select L2 as the more suitable location. From Table II, we can also find that if the company were to average satisfaction by using the arithmetic mean, L1’s zero satisfaction for A3 would be compensated by the satisfactions for A1, A2 and A4, neglecting their interrelationships. On the other hand, if the company were to average satisfaction by using the geometric mean, it would be unable to differentiate between L1 and L2, as both possess zero average satisfaction. Thus, EBM shows a certain advantage by capturing the interrelationship among the aggregated arguments more intuitively than some of the existing aggregation operators.

TABLE II
OVERALL SATISFACTION OF THE ALTERNATIVES UNDER DIFFERENT AGGREGATION OPERATORS

Aggregation Operator    Overall Satisfaction of L1    Overall Satisfaction of L2
Arithmetic Mean         0.3125                        0.225
Geometric Mean          0                             0
BM                      0.2915                        0.2082
EBM                     0                             0.245

Now, we investigate the desirable properties of the EBM operator.

Theorem 1 (Idempotency): If all the input arguments are equal, i.e., ai = a for all i, then

$$EBM^{p,q}(a, a, ..., a) = a \tag{10}$$

Proof: From (6), we have

$$EBM^{p,q}(a, a, ..., a) = \Bigg(\frac{n - card(I')}{n}\bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} a^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a^q\Big)\bigg)^{\frac{p}{p+q}} + \frac{card(I')}{n}\bigg(\frac{1}{card(I')}\sum_{i \in I'} a^p\bigg)\Bigg)^{\frac{1}{p}} = \Bigg(\frac{n - card(I')}{n}\bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} a^{p+q}\bigg)^{\frac{p}{p+q}} + \frac{card(I')}{n}a^p\Bigg)^{\frac{1}{p}} = \bigg(\frac{n - card(I')}{n}a^p + \frac{card(I')}{n}a^p\bigg)^{\frac{1}{p}} = a$$

Theorem 2 (Monotonicity): Let (a1, a2, ..., an) and (b1, b2, ..., bn) be two collections of input arguments such that ai ≤ bi for all i. If both input sets have the same kind of interrelationship among the arguments, then

$$EBM^{p,q}(a_1, a_2, ..., a_n) \leq EBM^{p,q}(b_1, b_2, ..., b_n) \tag{11}$$

Proof: Since ai ≤ bi for all i,

$$\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q \leq \frac{1}{card(I_i)}\sum_{j \in I_i} b_j^q, \quad \text{for all } q \geq 0 \text{ and } I_i \ (i = 1, 2, ..., n)$$

$$\Rightarrow \quad a_i^p\,\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q \leq b_i^p\,\frac{1}{card(I_i)}\sum_{j \in I_i} b_j^q \quad \text{for all } q \geq 0 \text{ and } i = 1, 2, ..., n.$$

It follows that

$$\bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} a_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q\Big)\bigg)^{\frac{p}{p+q}} \leq \bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} b_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} b_j^q\Big)\bigg)^{\frac{p}{p+q}} \tag{12}$$

Clearly,

$$\sum_{i \in I'} a_i^p \leq \sum_{i \in I'} b_i^p \quad \text{for all } p \geq 0 \tag{13}$$

From (12) and (13), we obtain

$$EBM^{p,q}(a_1, a_2, ..., a_n) \leq EBM^{p,q}(b_1, b_2, ..., b_n)$$

Corollary 1 (Boundedness): Let $a_u = \max_i a_i$ and $a_l = \min_i a_i$. Then the aggregated value by EBM satisfies:

$$a_l \leq EBM^{p,q}(a_1, a_2, ..., a_n) \leq a_u \tag{14}$$

Proof: Boundedness is a consequence of idempotency and monotonicity, i.e., Theorems 1 and 2.

V. 2-TUPLE LINGUISTIC EXTENDED BONFERRONI MEANS

Based on (6), we define the 2-tuple linguistic extended Bonferroni mean operator as follows:

Definition 8: Let S_T be the set of all linguistic 2-tuples on the linguistic term set S and let (gi, αi) (i = 1, 2, ..., n) be a collection of linguistic 2-tuples. For any p > 0 and q ≥ 0, the 2TLEBM aggregation operator of dimension n is a mapping 2TLEBM : (S_T)^n → S_T such that

$$2TLEBM^{p,q}((g_1, \alpha_1), (g_2, \alpha_2), ..., (g_n, \alpha_n)) = \Delta\Bigg(\bigg(\frac{n - card(I')}{n}\Big(\frac{1}{n - card(I')}\sum_{i \notin I'} \beta_i^p\big(\frac{1}{card(I_i)}\sum_{j \in I_i} \beta_j^q\big)\Big)^{\frac{p}{p+q}} + \frac{card(I')}{n}\Big(\frac{1}{card(I')}\sum_{i \in I'} \beta_i^p\Big)\bigg)^{\frac{1}{p}}\Bigg) \tag{15}$$

where βi = ∆⁻¹(gi, αi).

It is important to note here that, based on the cardinality of the independent arguments, we can derive the following three particular cases:

1) if card(I′) = n, i.e., all the arguments are independent, we obtain the 2-tuple linguistic power-root arithmetic mean given below:

$$2TLEBM^{p,q}((g_1, \alpha_1), ..., (g_n, \alpha_n)) = \Delta\Bigg(\Big(\frac{1}{n}\sum_{i=1}^{n}\beta_i^p\Big)^{\frac{1}{p}}\Bigg)$$

2) if card(I′) = 0 and each argument is dependent on all other arguments, 2TLEBM reduces to the 2-tuple linguistic Bonferroni mean (2TLBM):

$$2TLEBM^{p,q}((g_1, \alpha_1), ..., (g_n, \alpha_n)) = \Delta\Bigg(\Big(\frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i \neq j}}^{n}\beta_i^p\beta_j^q\Big)^{\frac{1}{p+q}}\Bigg) \tag{16}$$

3) if card(I′) = 0 and each argument is dependent on some other, but not always all, arguments, then (15) becomes:

$$2TLEBM^{p,q}((g_1, \alpha_1), ..., (g_n, \alpha_n)) = \Delta\Bigg(\Big(\frac{1}{n}\sum_{i=1}^{n}\beta_i^p\big(\frac{1}{card(I_i)}\sum_{j \in I_i}\beta_j^q\big)\Big)^{\frac{1}{p+q}}\Bigg) \tag{17}$$

In the following list, we consider some special cases of the 2TLEBM operator obtained by taking different values of the parameters p and q.

1) When q = 0, the 2TLEBM operator defined in (15) reduces to the 2-tuple linguistic power arithmetic mean (the proof is given in Appendix A). In this case, no interrelationship between the 2-tuple linguistic information is captured.

2) When p = 1 and q = 0, the 2TLEBM operator defined in (15) reduces to the 2-tuple linguistic arithmetic mean as follows:

$$2TLEBM^{1,0}((g_1, \alpha_1), ..., (g_n, \alpha_n)) = \Delta\Big(\frac{1}{n}\sum_{i=1}^{n}\beta_i\Big) \tag{18}$$

3) When p → 0 and q = 0, the 2TLEBM operator defined in (15) reduces to the 2-tuple linguistic geometric mean operator as follows:

$$\lim_{p \to 0} 2TLEBM^{p,0}((g_1, \alpha_1), ..., (g_n, \alpha_n)) = \Delta\Bigg(\lim_{p \to 0}\Big(\frac{1}{n}\sum_{i=1}^{n}\beta_i^p\Big)^{\frac{1}{p}}\Bigg) = \Delta\Bigg(\Big(\prod_{i=1}^{n}\beta_i\Big)^{\frac{1}{n}}\Bigg) \tag{19}$$

A. Properties of 2TLEBM

The desirable properties of the 2TLEBM aggregation operator are described in the following theorems (their validity follows from Theorem 1, Theorem 2 and Corollary 1):

Theorem 3 (Idempotency): If all the 2-tuple linguistic inputs are equal, i.e., (gi, αi) = (l, α) (i = 1, ..., n), then

$$2TLEBM^{p,q}((g_1, \alpha_1), (g_2, \alpha_2), ..., (g_n, \alpha_n)) = (l, \alpha) \tag{20}$$

Theorem 4 (Monotonicity): Consider two collections of linguistic 2-tuple input arguments ((g1, α1), (g2, α2), ..., (gn, αn)) and ((g′1, α′1), (g′2, α′2), ..., (g′n, α′n)) with (g′i, α′i) ≤ (gi, αi) for all i. We further assume that both input sets have the same kind of interrelationship among the input arguments. Then

$$2TLEBM^{p,q}((g'_1, \alpha'_1), ..., (g'_n, \alpha'_n)) \leq 2TLEBM^{p,q}((g_1, \alpha_1), ..., (g_n, \alpha_n)) \tag{21}$$

Theorem 5 (Boundedness): For any collection of linguistic 2-tuples ((g1, α1), (g2, α2), ..., (gn, αn)),

$$\min_i (g_i, \alpha_i) \leq 2TLEBM^{p,q}((g_1, \alpha_1), ..., (g_n, \alpha_n)) \leq \max_i (g_i, \alpha_i) \tag{22}$$

B. Weighted form of 2TLEBM

In the aforementioned analysis, only the input arguments and their patterns of interrelationships are considered in the aggregation process. However, the importances of the inputs are not emphasized. In many practical applications the input arguments may have different importances or weights. Thus, it is worthwhile to define the aggregation operator by considering the weights of the input arguments. In view of this, we define two weighted forms of 2TLEBM.

1) Numerical weighted form of 2TLEBM: In this case, the weights of the inputs are completely known and they are represented by exact numerical values.

Definition 9: Let ((g1, α1), (g2, α2), ..., (gn, αn)) be a collection of linguistic 2-tuples. For any p > 0 and q ≥ 0, the weighted 2-tuple linguistic extended Bonferroni mean (W2TLEBM) aggregation operator of dimension n is a mapping W2TLEBM : (S_T)^n → S_T such that

$$W2TLEBM^{p,q}((g_1, \alpha_1), ..., (g_n, \alpha_n)) = \Delta\Bigg(\bigg(\Big(1 - \sum_{i \in I'} w_i\Big)\Big(\frac{1}{1 - \sum_{i \in I'} w_i}\sum_{i \notin I'} w_i\beta_i^p\big(\frac{1}{\sum_{j \in I_i} w_j}\sum_{j \in I_i} w_j\beta_j^q\big)\Big)^{\frac{p}{p+q}} + \sum_{i \in I'} w_i\Big(\frac{1}{\sum_{i \in I'} w_i}\sum_{i \in I'} w_i\beta_i^p\Big)\bigg)^{\frac{1}{p}}\Bigg) \tag{23}$$

where βi = ∆⁻¹(gi, αi), and wi (i = 1, 2, ..., n) indicates the relative importance of (gi, αi) (i = 1, 2, ..., n) and satisfies the conditions wi ≥ 0 and $\sum_{i=1}^{n} w_i = 1$.

When each (gi, αi) (i = 1, 2, ..., n) has equal importance, i.e., w = (1/n, 1/n, ..., 1/n)^T, then (23) becomes the 2TLEBM operator (15) (the proof is given in Appendix B). When p = 1 and q = 0, (23) reduces to the weighted 2-tuple linguistic arithmetic mean (W2TLAM) (the proof is given in Appendix C).

2) Linguistic weighted form of 2TLEBM: In some situations, it is not always possible to determine the relative importances of the inputs as exact numerical values due to lack of information, time pressure and incomplete knowledge of the studied system. For example, in MADM, it may be a difficult task for the experts to assign the attributes’ weights as exact numerical values. They may then be more comfortable providing the importances of the attributes in linguistic terms. Thus, an aggregation operator is needed to compute the overall ratings of the alternatives when the importance of an attribute is expressed in linguistic terms. In order to take account of the linguistic weights of the inputs, we define the linguistic weighted form of 2TLEBM as follows:

Definition 10: Let ((g1, α1), (g2, α2), ..., (gn, αn)) be a collection of linguistic 2-tuples and let (ui, θi) (differing from (l0, 0) for at least one i) be the associated linguistic importance of the i-th input. Then, for any p > 0 and q ≥ 0, the linguistic weighted 2-tuple linguistic extended Bonferroni mean (LW-2TLEBM) aggregation operator of dimension n is a mapping LW-2TLEBM : (S_T)^n → S_T such that

$$LW\text{-}2TLEBM^{p,q}((g_1, \alpha_1), ..., (g_n, \alpha_n)) = \Delta\Bigg(\bigg(\Big(1 - \sum_{i \in I'} w'_i\Big)\Big(\frac{1}{1 - \sum_{i \in I'} w'_i}\sum_{i \notin I'} w'_i\beta_i^p\big(\frac{1}{\sum_{j \in I_i} w'_j}\sum_{j \in I_i} w'_j\beta_j^q\big)\Big)^{\frac{p}{p+q}} + \sum_{i \in I'} w'_i\Big(\frac{1}{\sum_{i \in I'} w'_i}\sum_{i \in I'} w'_i\beta_i^p\Big)\bigg)^{\frac{1}{p}}\Bigg) \tag{24}$$

where $w'_i = \Delta^{-1}(u_i, \theta_i)\big/\sum_{j=1}^{n}\Delta^{-1}(u_j, \theta_j)$. Clearly, $w'_j \geq 0$ and $\sum_{j=1}^{n} w'_j = 1$.

As earlier, based on the cardinality of the independent arguments, we derive three particular cases as follows:

1) if card(I′) = n, i.e., if all the arguments are independent, we obtain the linguistic weighted 2-tuple linguistic power-root arithmetic mean given below:

$$LW\text{-}2TLEBM^{p,q}((g_1, \alpha_1), ..., (g_n, \alpha_n)) = \Delta\Bigg(\Big(\sum_{i=1}^{n} w'_i\beta_i^p\Big)^{\frac{1}{p}}\Bigg)$$

2) if card(I′) = 0 and each argument is dependent on all other arguments, LW-2TLEBM reduces to the linguistic weighted 2-tuple linguistic Bonferroni mean (LW-2TLBM):

$$LW\text{-}2TLEBM^{p,q}((g_1, \alpha_1), ..., (g_n, \alpha_n)) = \Delta\Bigg(\Big(\sum_{\substack{i,j=1 \\ i \neq j}}^{n}\frac{w'_i w'_j}{1 - w'_i}\beta_i^p\beta_j^q\Big)^{\frac{1}{p+q}}\Bigg) \tag{25}$$

3) if card(I′) = 0 and each argument is dependent on some other, but not always all, arguments, then LW-2TLEBM becomes:

$$LW\text{-}2TLEBM^{p,q}((g_1, \alpha_1), ..., (g_n, \alpha_n)) = \Delta\Bigg(\Big(\sum_{i=1}^{n} w'_i\beta_i^p\big(\frac{1}{\sum_{j \in I_i} w'_j}\sum_{j \in I_i} w'_j\beta_j^q\big)\Big)^{\frac{1}{p+q}}\Bigg) \tag{26}$$


Moreover, it can be easily proved that both the numerical and the linguistic weighted forms of 2TLEBM satisfy the idempotency, monotonicity, and boundedness properties of an aggregation operator.
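The following self-contained sketch (our own naming and structure, not the authors' code) illustrates the LW-2TLEBM operator of Eq. (24): the linguistic weights (ui, θi) are first normalized through ∆⁻¹, and the weighted aggregation is then carried out on the numerical values βi.

```python
# Illustrative sketch of LW-2TLEBM, Eq. (24).
# relations[i] lists the indices in I_i; an empty list marks an independent input.

S = ["VL", "L", "ML", "N", "MH", "H", "VH"]

def delta(beta):
    i = int(beta + 0.5)
    return S[i], beta - i

def delta_inv(label, alpha):
    return S.index(label) + alpha

def lw_2tlebm(tuples, ling_weights, relations, p=1.0, q=1.0):
    betas = [delta_inv(l, a) for l, a in tuples]
    raw = [delta_inv(u, t) for u, t in ling_weights]
    w = [r / sum(raw) for r in raw]                    # normalized weights w'_i
    n = len(betas)
    dep = [i for i in range(n) if relations[i]]
    ind = [i for i in range(n) if not relations[i]]
    total = 0.0
    if dep:
        inner = sum(w[i] * betas[i]**p *
                    sum(w[j] * betas[j]**q for j in relations[i]) /
                    sum(w[j] for j in relations[i])
                    for i in dep)
        w_dep = 1.0 - sum(w[i] for i in ind)           # 1 - sum of weights in I'
        total += w_dep * (inner / w_dep) ** (p / (p + q))
    if ind:
        w_ind = sum(w[i] for i in ind)
        total += w_ind * (sum(w[i] * betas[i]**p for i in ind) / w_ind)
    return delta(total ** (1.0 / p))

if __name__ == "__main__":
    ratings = [("MH", 0.2), ("N", -0.1), ("H", 0.0), ("L", 0.3)]
    weights = [("H", 0.0), ("MH", 0.0), ("VH", 0.0), ("N", 0.0)]  # linguistic weights
    relations = [[2], [2, 3], [0, 1], [1]]  # hypothetical pattern
    print(lw_2tlebm(ratings, weights, relations))
```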

VI. AN APPROACH FOR MAGDM WITH LINGUISTIC ASSESSMENTS

In this section, we propose a method for solving the MAGDM problem with 2-tuple linguistic information. A MAGDM problem with linguistic information is depicted as follows.

There is a group of t experts {J1, J2, ..., Jt} and a set of m alternatives X = {X1, X2, ..., Xm}. The experts’ aim is to choose the best alternative among the m alternatives depending on n attributes A = {A1, A2, ..., An}.

We further assume that the attributes are heterogeneously inter-related, i.e., some of the attributes, denoted as Ai, are related to a subset Bi of the set A \ {Ai} and the others are independent. Each expert comes from a different background and possesses a different level of knowledge and ability, which makes their importances different in the decision making process. The weights of the experts are completely unknown here.

Assume that the experts provide the weight vector of the attributes in 2-tuple linguistic form. Let Wk = ((w1k, η1k), (w2k, η2k), ..., (wnk, ηnk)) be the 2-tuple linguistic weight vector of the attributes given by the expert Jk (1 ≤ k ≤ t), where wjk (1 ≤ j ≤ n) belongs to the predefined linguistic term set S and ηjk ∈ [−0.5, 0.5).

The k-th expert Jk provides his/her rating of an alternative Xi (1 ≤ i ≤ m) with respect to the attribute Aj as a 2-tuple $d_{ij}^{(k)} = (g_{ij}^{(k)}, \alpha_{ij}^{(k)})$, where $g_{ij}^{(k)}$ belongs to the predefined linguistic term set S and $\alpha_{ij}^{(k)} \in [-0.5, 0.5)$. The expert Jk's ratings of the alternatives are summarized in the 2-tuple linguistic decision matrix $D_k = (d_{ij}^{(k)})_{m \times n}$ as follows:

$$D_k = \begin{array}{c|cccc} & A_1 & A_2 & \cdots & A_n \\ \hline X_1 & d_{11}^{(k)} & d_{12}^{(k)} & \cdots & d_{1n}^{(k)} \\ X_2 & d_{21}^{(k)} & d_{22}^{(k)} & \cdots & d_{2n}^{(k)} \\ \vdots & \vdots & \vdots & & \vdots \\ X_m & d_{m1}^{(k)} & d_{m2}^{(k)} & \cdots & d_{mn}^{(k)} \end{array}$$

On the basis of the above decision inputs, an algorithm for solving the MAGDM problem is presented here. The objective of the algorithm is twofold: (1) to obtain the experts' unknown weights by utilizing the linguistic similarity measure; (2) to select the best alternative from the alternative set X with respect to the attribute set A. The steps of the proposed algorithm are as follows:

Step 1: From k-th expert’s 2-tuple linguistic decisionmatrix Dk = (d

(k)ij )m×n, alternative Xi’s over-

all performance value (r(k)i , α

(k)i ) is calculated by

utilizing LW-2TLEBM operator (24), which takethe form of (27) as shown in the beginning ofthe next page. The parameters of (27) are asfollows: β

(k)ij1

= ∆−1S (l(k)ij1, α

(k)ij1

) and w′j1k =∆−1(wj1k, ηj1k)/

∑nj=1 ∆−1(wjk, ηjk). The above

evaluation can be summarized in the matrix form as

follows:

\[
O = \begin{array}{c|cccc}
 & J_1 & J_2 & \cdots & J_t \\ \hline
X_1 & (r^{(1)}_{1}, \alpha^{(1)}_{1}) & (r^{(2)}_{1}, \alpha^{(2)}_{1}) & \cdots & (r^{(t)}_{1}, \alpha^{(t)}_{1}) \\
X_2 & (r^{(1)}_{2}, \alpha^{(1)}_{2}) & (r^{(2)}_{2}, \alpha^{(2)}_{2}) & \cdots & (r^{(t)}_{2}, \alpha^{(t)}_{2}) \\
\vdots & \vdots & \vdots & \cdots & \vdots \\
X_m & (r^{(1)}_{m}, \alpha^{(1)}_{m}) & (r^{(2)}_{m}, \alpha^{(2)}_{m}) & \cdots & (r^{(t)}_{m}, \alpha^{(t)}_{m})
\end{array}
\]

where the k-th column r^{(k)} = ((r^{(k)}_1, α^{(k)}_1), (r^{(k)}_2, α^{(k)}_2), ..., (r^{(k)}_m, α^{(k)}_m))^T of the matrix O represents expert Jk's overall ratings of the alternatives X1, X2, ..., Xm.
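As a concrete reading of Step 1, the following sketch builds the matrix O column by column, reusing the hypothetical lw_2tlebm helper from the earlier snippet. The data layout (D as a list of per-expert matrices of 2-tuples, Wk as per-expert normalized attribute weights) is an assumption made for illustration, not the authors' implementation.

```python
# Sketch of Step 1: apply the LW-2TLEBM sketch row-wise to each expert's
# decision matrix. D[k] is expert J_k's m x n matrix of 2-tuples, Wk[k] the
# normalized attribute weights derived from that expert's linguistic weights.

def overall_ratings(D, Wk, related, independent, p=1.0, q=1.0):
    """Return O as a list of columns: O[k][i] = (r_i^(k), alpha_i^(k))."""
    return [[lw_2tlebm(row, Wk[k], related, independent, p, q)
             for row in D[k]]
            for k in range(len(D))]
```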

Step 2: In this step, we construct the similarity matrix, which plays a key role in determining the experts' weights. The aim is to calculate the similarity between each pair of experts' overall ratings of the alternatives. For this purpose, consider any two experts Jk1 and Jk2 (1 ≤ k1, k2 ≤ t) among the t experts. From the matrix O, the overall ratings of the alternatives provided by the experts Jk1 and Jk2 are r^{(k1)} = ((r^{(k1)}_1, α^{(k1)}_1), ..., (r^{(k1)}_m, α^{(k1)}_m))^T and r^{(k2)} = ((r^{(k2)}_1, α^{(k2)}_1), ..., (r^{(k2)}_m, α^{(k2)}_m))^T, respectively. Since the experts provide their assessments in linguistic terms, the similarity between the experts' opinions may also be expressed in linguistic terms. In view of this, the similarity between the overall opinions of Jk1 and Jk2 over the alternatives can be calculated by the proposed linguistic similarity measure (2) of linguistic 2-tuples. Let sm_{k1 k2} be the linguistic similarity between the opinions of the experts Jk1 and Jk2, which is derived as follows:

\begin{equation}
sm_{k_1 k_2} = sim\bigl(r^{(k_1)}, r^{(k_2)}\bigr) = \Delta_{S'}\Biggl(\frac{1}{m}\sum_{i=1}^{m} \Delta^{-1}_{S'}\Bigl(sim\bigl((r^{(k_1)}_i, \alpha^{(k_1)}_i), (r^{(k_2)}_i, \alpha^{(k_2)}_i)\bigr)\Bigr)\Biggr)
\tag{28}
\end{equation}

The evaluation of the linguistic similarity between each pair of experts can be presented in matrix form as follows:

\[
S = \begin{array}{c|cccc}
 & J_1 & J_2 & \cdots & J_t \\ \hline
J_1 & sm_{11} & sm_{12} & \cdots & sm_{1t} \\
J_2 & sm_{21} & sm_{22} & \cdots & sm_{2t} \\
\vdots & \vdots & \vdots & \cdots & \vdots \\
J_t & sm_{t1} & sm_{t2} & \cdots & sm_{tt}
\end{array}
\]
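A rough sketch of (28) follows. Since the paper's linguistic similarity measure (2) is defined earlier in the article and not reproduced here, the element-wise similarity below is only a hedged placeholder (one minus the normalized distance of the Δ^{-1} values), and the granularities T of S and T' of S' are assumed parameters; the function names are again hypothetical.

```python
# Placeholder sketch of Step 2 / Eq. (28), reusing delta and delta_inv above.

def tuple_similarity(a, b, T=6, T_prime=6):
    """Placeholder similarity of two 2-tuples, returned as a 2-tuple on S'."""
    d = abs(delta_inv(*a) - delta_inv(*b)) / T        # normalized distance
    return delta((1.0 - d) * T_prime)                 # map onto the scale of S'

def expert_similarity(r1, r2, T=6, T_prime=6):
    """Eq. (28): average the element-wise similarities over the m alternatives."""
    m = len(r1)
    avg = sum(delta_inv(*tuple_similarity(a, b, T, T_prime))
              for a, b in zip(r1, r2)) / m
    return delta(avg)
```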

Step 3: Calculate the average linguistic similarity of each expert Jk (k = 1, 2, ..., t) with the rest of the experts as follows:

\begin{equation}
sm_k = \Delta_{S'}\Biggl(\frac{1}{t-1}\sum_{\substack{k_1=1 \\ k_1\neq k}}^{t} \Delta^{-1}_{S'}(sm_{k k_1})\Biggr)
\tag{29}
\end{equation}

Step 4: Determine the relative importance, i.e., the weight, of each expert Jk as follows:

\begin{equation}
\lambda_k = \frac{\Delta^{-1}_{S'}(sm_k)}{\sum_{u=1}^{t} \Delta^{-1}_{S'}(sm_u)}
\tag{30}
\end{equation}

Clearly, λk > 0 and Σ_{k=1}^{t} λk = 1.
Step 5: Finally, for each alternative Xi (i = 1, 2, ..., m), calculate the group overall rating oi = (ri, αi) by using the weighted 2-tuple linguistic arithmetic mean (31) given below.


The explicit form of (27), used in Step 1, is

\begin{equation}
\begin{split}
(r^{(k)}_i, \alpha^{(k)}_i) &= LW\text{-}2TLEBM^{p,q}\bigl((g^{(k)}_{i1}, \alpha^{(k)}_{i1}), (g^{(k)}_{i2}, \alpha^{(k)}_{i2}), \ldots, (g^{(k)}_{in}, \alpha^{(k)}_{in})\bigr) \\
&= \Delta\Biggl(\biggl(\Bigl(1-\sum_{j_1\in I'} w'_{j_1 k}\Bigr)\Bigl(\tfrac{1}{1-\sum_{j_1\in I'} w'_{j_1 k}}\sum_{j_1\notin I'} w'_{j_1 k}\bigl(\beta^{(k)}_{ij_1}\bigr)^{p}\Bigl(\tfrac{1}{\sum_{j_2\in I_{j_1}} w'_{j_2 k}}\sum_{j_2\in I_{j_1}} w'_{j_2 k}\bigl(\beta^{(k)}_{ij_2}\bigr)^{q}\Bigr)\Bigr)^{\frac{p}{p+q}} \\
&\qquad + \sum_{j_1\in I'} w'_{j_1 k}\bigl(\beta^{(k)}_{ij_1}\bigr)^{p}\biggr)^{\frac{1}{p}}\Biggr)
\end{split}
\tag{27}
\end{equation}

The W2TLAM referred to in Step 5 is given by

\begin{equation}
W2TLAM\bigl((r^{(1)}_i, \alpha^{(1)}_i), (r^{(2)}_i, \alpha^{(2)}_i), \ldots, (r^{(t)}_i, \alpha^{(t)}_i)\bigr) = \Delta\Biggl(\sum_{k=1}^{t} \lambda_k\beta_{ik}\Biggr)
\tag{31}
\end{equation}

where β_{ik} = Δ^{-1}_S(r^{(k)}_i, α^{(k)}_i) and λ = (λ1, λ2, ..., λt)^T is the weight vector of the experts.
Step 6: Rank the alternatives Xi (i = 1, 2, ..., m) based on their group overall rating values (ri, αi) (i = 1, 2, ..., m) by using the comparison method of linguistic 2-tuples described in Section II, and choose the best alternative according to the ranking order.
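Steps 3 through 6 can be sketched as follows, reusing the hypothetical helpers above. Note that averaging the Δ^{-1} values directly, as done here, is equivalent to applying Δ_{S'} and then Δ^{-1}_{S'} as written in (29) and (30), since the two translations are mutually inverse.

```python
# Sketch of Steps 3-6. `overall` is the matrix O as a list of columns:
# overall[k][i] = (r_i^(k), alpha_i^(k)).

def expert_weights(overall, T=6, T_prime=6):
    """Eqs. (29)-(30): average each expert's similarity to the others, normalize."""
    t = len(overall)
    sm = []
    for k in range(t):
        others = [expert_similarity(overall[k], overall[k1], T, T_prime)
                  for k1 in range(t) if k1 != k]
        sm.append(sum(delta_inv(*s) for s in others) / (t - 1))
    total = sum(sm)
    return [s / total for s in sm]                      # the weights lambda_k

def group_ratings(overall, lam):
    """Eq. (31): weighted 2-tuple linguistic arithmetic mean over the experts."""
    t, m = len(overall), len(overall[0])
    return [delta(sum(lam[k] * delta_inv(*overall[k][i]) for k in range(t)))
            for i in range(m)]

def rank_alternatives(group):
    """Step 6: order alternatives by the numerical value of their 2-tuples."""
    return sorted(range(len(group)),
                  key=lambda i: delta_inv(*group[i]), reverse=True)
```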

We next present a real-life example to illustrate the proposed MAGDM technique.

VII. A PRACTICAL EXAMPLE

A major U.S.-based bicycle manufacturer is planning to expand its business in the Asian market due to the high demand for its products in this region and the possibility of further growth of the company. Currently, the company is facing difficulty in meeting the overseas demand, as it has only one production unit, located in the U.S. Owing to the lower production cost in Asia and the potential for high growth, the management of the company has decided to open a new production unit in Asia. After initial screening, the management has found four potential locations in four different Asian countries for the production unit. In order to identify the most suitable location, the management has formed a committee consisting of three experts, J1, J2, and J3. Analyzing all possible factors, the management has identified a set of seven attributes to evaluate the alternatives, i.e., the four locations X1, X2, X3, and X4. The attributes are as follows:
A1: Market
A2: Business climate
A3: Labor characteristics
A4: Infrastructure
A5: Availability of raw materials
A6: Investment cost
A7: Possibility for further expansion
The attributes are inter-related, and the interrelationship among them is presented in Fig. 4. From Fig. 4, we observe that all the attributes are dependent, i.e., there is no independent attribute. We also note that each attribute is dependent on only a subset of the attribute set. In this scenario, LW-2TLEBM in the form of (26) can handle the interrelationship pattern among the attributes more intuitively and, therefore, we will utilize (26) to compute the alternatives' overall performances.


Fig. 4. Interrelationships among the attributes, where Ai − Aj indicates that Ai and Aj are inter-related.

TABLE III
2-TUPLE LINGUISTIC DECISION MATRIX D1

A1 A2 A3 A4 A5 A6 A7

X1 (l3, 0) (l4, 0) (l2, 0) (l5, 0) (l3, 0) (l2, 0) (l3, 0)

X2 (l1, 0) (l3, 0) (l4, 0) (l4, 0) (l2, 0) (l3, 0) (l2, 0)

X3 (l4, 0) (l3, 0) (l5, 0) (l3, 0) (l5, 0) (l4, 0) (l3, 0)

X4 (l5, 0) (l4, 0) (l3, 0) (l3, 0) (l4, 0) (l5, 0) (l2, 0)

As the information is highly uncertain, the experts are unable to give their preferences as numerical values. They decide to provide their preferences by using 2-tuple linguistic information according to the following linguistic term set:

S = {l0 = very low (VL), l1 = low (L), l2 = moderately low (ML), l3 = normal (N), l4 = moderately high (MH), l5 = high (H), l6 = very high (VH)}

The linguistic assessments of the four locations given by the experts with respect to all the attributes are presented in Tables III-V.

The experts also use linguistic variables from the above linguistic term set S to assess the relative importance of the attributes. The weight vectors of the attributes provided by the experts are summarized in Table VI. Now, the proposed MAGDM method can be applied for the selection of the best

TABLE IV
2-TUPLE LINGUISTIC DECISION MATRIX D2

A1 A2 A3 A4 A5 A6 A7

X1 (l2, 0) (l5, 0) (l3, 0) (l4, 0) (l5, 0) (l3, 0) (l2, 0)

X2 (l2, 0) (l4, 0) (l5, 0) (l2, 0) (l1, 0) (l2, 0) (l5, 0)

X3 (l1, 0) (l4, 0) (l6, 0) (l4, 0) (l4, 0) (l3, 0) (l4, 0)

X4 (l4, 0) (l3, 0) (l3, 0) (l4, 0) (l5, 0) (l6, 0) (l3, 0)


TABLE V
2-TUPLE LINGUISTIC DECISION MATRIX D3

A1 A2 A3 A4 A5 A6 A7

X1 (l4, 0) (l3, 0) (l2, 0) (l5, 0) (l4, 0) (l2, 0) (l3, 0)

X2 (l3, 0) (l5, 0) (l4, 0) (l3, 0) (l2, 0) (l4, 0) (l1, 0)

X3 (l4, 0) (l5, 0) (l5, 0) (l2, 0) (l4, 0) (l5, 0) (l3, 0)

X4 (l4, 0) (l3, 0) (l5, 0) (l3, 0) (l3, 0) (l5, 0) (l3, 0)

TABLE VI
2-TUPLE LINGUISTIC WEIGHTS OF THE ATTRIBUTES

A1 A2 A3 A4 A5 A6 A7

W1 (l4, 0) (l3, 0.3) (l4, 0) (l5, 0) (l3,−0.5) (l4, 0) (l2, 0)

W2 (l4,−0.5) (l3,−0.5) (l4, 0) (l4,−0.3) (l5, 0) (l4, 0.4) (l4, 0)

W3 (l3, 0) (l4, 0) (l5, 0) (l4, 0.3) (l5, 0) (l6, 0) (l3, 0)

TABLE VII
ALTERNATIVES' INDIVIDUAL OVERALL RATINGS

J1 J2 J3

X1 (l3, 0.18) (l3, 0.11) (l3, 0.20)

X2 (l3,−0.30) (l3, 0.15) (l3,−0.02)

X3 (l4,−0.41) (l4,−0.24) (l4,−0.23)

X4 (l4,−0.45) (l4,−0.06) (l4,−0.4)

location.

Step 1: From each expert’s decision matrix Dk =

(d(k)ij )4×7(k = 1, 2, 3) (provided in Tables II-IV),

by utilizing (26) with p = q = 1( for the sakeof simplicity in computation) and linguistic weightvector Wk(k = 1, 2, 3) (provided in Table V), wecompute the overall ratings of the alternatives Xi(i =1, 2, 3, 4). The aggregation results are summarized inTable VII.

Step 2: We construct the similarity matrix S of the experts by utilizing (28). The similarity matrix is presented in Table VIII.

Step 3: The average linguistic similarity of each expert is computed by using (29) as given below.

sm1 = (s′6,−0.21), sm2 = (s′6,−0.22),

sm3 = (s′6,−0.14).

TABLE VIII
SIMILARITY MATRIX OF THE EXPERTS

J1 J2 J3

J1 (s′6, 0) (s′6,−0.28) (s′6,−0.14)

J2 (s′6,−0.28) (s′6, 0) (s′6,−0.16)

J3 (s′6,−0.14) (s′6,−0.16) (s′6, 0)

Step 4: Experts’ weight vector is derived by using (30) asfollows:

λT = (0.3324, 0.3318, 0.3358)

Step 5: The group overall ratings of the alternatives arecomputed by using the weight vector of the ex-perts (obtained in Step 4), and (31) as follows:(r1, α1) = (l3, 0.17), (r2, α2) = (l3,−0.06),(r3, α3) = (l4,−0.29), (r4, α4) = (l4,−0.30)

Step 6: According to the group overall rating values, the ranking order of the alternatives is X3 > X4 > X1 > X2. Hence, the most suitable location for setting up the new production unit is X3.

Some important observations about the results of the above problem, depending on the values of the parameters p and q, are presented below.

In the above computation, we have taken the values of the parameters p and q as one. If instead we take p = 1 and q = 3, we obtain the following group overall ratings of the alternatives:

(r1, α1) = (l3, 0.13), (r2, α2) = (l3, 0.14),

(r3, α3) = (l4,−0.19), (r4, α4) = (l4,−0.23).

The new evaluation produces a new ranking order of the alternatives: X4 > X3 > X2 > X1. Hence, X4 is the most desirable alternative. This ranking result differs slightly from the ranking order obtained with p = q = 1; that is, the ranking orders of the pairs X3 and X4, and X2 and X1, are reversed. Hence, the ranking result may differ for different values of p and q. In general, p and q can take any value between zero and infinity. In the above example, the alternatives' group overall ratings change as the parameters p and q vary over this range, and the results are depicted in Fig. 5. From the aforementioned example and figures, we observe that the values obtained by LW-2TLEBM also depend on the choice of the parameters p and q; in this sense, these parameters are not robust [33]. For larger values of p and q, more computational effort is required; moreover, in the special case in which one of the parameters becomes zero, no interrelationship among the aggregated arguments is captured. Therefore, for practical applications, we suggest taking the values of the parameters p and q as one, which is not only intuitive and simple but also reflects the interrelationship among the aggregated arguments.
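The sensitivity analysis described above can be reproduced schematically as follows, again on the invented data of the earlier usage snippet rather than the actual decision matrices of Tables III-VI.

```python
# Sketch of the sensitivity check on p and q: recompute an aggregated value
# over a grid of parameter values, using the hypothetical data defined earlier.
for p in (0.5, 1, 2, 5, 10, 20):
    for q in (0.5, 1, 2, 5, 10, 20):
        value = lw_2tlebm(tuples, weights, related, independent, p=p, q=q)
        print(f"p={p:>4}, q={q:>4} -> {value}")
```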

A. Comparison of performance with other existing aggregation operators

To further illustrate the applicability of the proposed operator, we solve the above location selection problem by using five existing linguistic aggregation operators: the weighted 2-tuple linguistic arithmetic mean (W2TLAM) operator, the weighted 2-tuple linguistic geometric mean (W2TLGM) operator, the weighted 2-tuple linguistic harmonic mean (W2TLHM) operator, the 2-tuple linguistic weighted power arithmetic mean (2TLWPAM) operator, and the weighted 2-tuple linguistic Bonferroni mean (W2TLBM) operator.


Fig. 5. (a), (b), (c), and (d) represent the changes in the group overall ratings of the alternatives X1, X2, X3, and X4, respectively, when the parameters p and q of (26) take values from the interval (0, 20].

TABLE IX
GROUP OVERALL RATINGS OF THE ALTERNATIVES OBTAINED BY USING DIFFERENT AGGREGATION OPERATORS

Alternatives W2TLAM W2TLGM W2TLHM 2TLWPAM W2TLBM
X1 (l3, 0.27) (l3, 0.08) (l3, −0.1) (l3, 0.06) (l3, 0.23)
X2 (l3, 0) (l3, −0.32) (l2, 0.33) (l3, −0.27) (l3, −0.06)
X3 (l4, −0.1) (l4, −0.31) (l3, 0.43) (l4, −0.31) (l4, −0.13)
X4 (l4, −0.08) (l4, −0.21) (l4, −0.34) (l4, −0.26) (l4, −0.12)

TABLE X
RANKING ORDER OF THE ALTERNATIVES IN DIFFERENT CASES

Aggregation operator Ranking order of the alternatives
W2TLAM X4 > X3 > X1 > X2
W2TLGM X4 > X3 > X1 > X2
W2TLHM X4 > X3 > X1 > X2
W2TLBM X4 > X3 > X1 > X2
2TLWPAM X4 > X3 > X1 > X2
W2TLEBM X3 > X4 > X1 > X2

In order to compare the performance of W2TLEBM with the aforementioned aggregation operators, we compute the alternatives' overall performances from each expert's decision matrix Dk by using these aggregation operators and, following the same steps as in the proposed decision-making process, calculate the alternatives' group overall performances. The results are summarized in Table IX. Based on the alternatives' group overall performances, the ranking orders of the alternatives in each case are presented in Table X.

It is clear from Table X that the ranking orders of the alternatives obtained by these existing operators are significantly different from the ranking result obtained by using the W2TLEBM operator. The W2TLEBM operator identifies X3 as the most desirable location for the production plant, whereas the other aggregation operators choose X4 as the best location. The main reason for this significant difference in the ranking order is that the W2TLEBM operator can model the exact relationships among the attributes, whereas the other aggregation techniques offer no scope for modeling this kind of interrelationship among the aggregated arguments.

VIII. CONCLUSION

In this paper, we have proposed an aggregation operator which we refer to as the EBM. The proposed operator is able to handle the interrelationship pattern among the attributes more intuitively. Semantically, EBM can model heterogeneous connections among the attributes, whereas BM captures only homogeneous relationships among the attributes. Furthermore, we have shown that, for a given input set, the aggregated values computed by EBM vary depending on the interrelationship pattern of the input arguments. We have discussed a variety of special cases of the EBM operator. Moreover, the EBM operator satisfies the properties of a mean-type aggregation operator, such as idempotency, monotonicity, and boundedness. Further, to deal with linguistic information, we have extended it to the 2-tuple linguistic environment and proposed the 2-tuple linguistic extended Bonferroni mean (2TLEBM) operator. The desirable properties of 2TLEBM have been studied in detail. By taking different values of the parameters p and q, we have shown that the 2-tuple linguistic arithmetic mean and geometric mean are special cases of 2TLEBM. Based on the weighted form of 2TLEBM and the linguistic similarity measure, a technique for solving MAGDM problems has been developed. Finally, with the help of a site selection problem, we have shown that W2TLEBM is capable of capturing the specific interrelationships among the attributes, while other existing linguistic aggregation operators, including BM, fail to reflect the exact interrelationship among the attributes.

The main advantages of the proposed MAGDM technique can be summarized as follows: (1) by taking the conjunction of the satisfactions of only the inter-related attributes, the proposed linguistic EBM operator can not only model the heterogeneous connection among attributes but also avoid the effect of conjunctions of unrelated attributes during the aggregation of the alternatives' performances under different attributes; (2) by interpreting the similarity between 2-tuple linguistic information using linguistic terms, the proposed similarity measure helps the experts to understand it more intuitively than a numerical representation; (3) the linguistic weights of the attributes help the experts to express their uncertainty about the weight information more comfortably.

In further research, other types of interrelationship among the attributes and their reflection in the aggregation process need to be explored by introducing new types of aggregation operators.


APPENDIX A
DEDUCTION OF THE 2-TUPLE LINGUISTIC POWER ARITHMETIC MEAN FROM 2TLEBM

From (15), we have

\begin{align*}
&2TLEBM^{p,0}((g_1,\alpha_1),(g_2,\alpha_2),\ldots,(g_n,\alpha_n)) \\
&= \Delta\Biggl(\biggl(\frac{n-\mathrm{card}(I')}{n}\Bigl(\tfrac{1}{n-\mathrm{card}(I')}\sum_{i\notin I'}\beta_i^{p}\Bigl(\tfrac{1}{\mathrm{card}(I_i)}\sum_{j\in I_i}\beta_j^{0}\Bigr)\Bigr)^{\frac{p}{p+0}} + \frac{\mathrm{card}(I')}{n}\Bigl(\tfrac{1}{\mathrm{card}(I')}\sum_{i\in I'}\beta_i^{p}\Bigr)\biggr)^{\frac{1}{p}}\Biggr) \\
&= \Delta\Biggl(\Bigl(\frac{1}{n}\sum_{i\notin I'}\beta_i^{p} + \frac{1}{n}\sum_{i\in I'}\beta_i^{p}\Bigr)^{\frac{1}{p}}\Biggr) \\
&= \Delta\Biggl(\Bigl(\frac{1}{n}\sum_{i=1}^{n}\beta_i^{p}\Bigr)^{\frac{1}{p}}\Biggr)
\end{align*}
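As a quick numerical sanity check of this reduction, one can verify with the earlier sketch that, under the assumption that lw_2tlebm with equal weights w_i = 1/n coincides with 2TLEBM, setting q = 0 returns the 2-tuple linguistic power arithmetic mean. The data and helper names below are hypothetical.

```python
# Hypothetical check of the Appendix A reduction, reusing lw_2tlebm, delta,
# and delta_inv from the earlier sketch.
data = [(3, 0.1), (4, -0.2), (2, 0.0), (5, 0.3)]
n, p = len(data), 2.0
eq_w = [1.0 / n] * n
rel = {i: {j for j in range(n) if j != i} for i in range(n)}

lhs = lw_2tlebm(data, eq_w, rel, independent=set(), p=p, q=0.0)
rhs = delta((sum(delta_inv(g, a) ** p for g, a in data) / n) ** (1.0 / p))
print(lhs, rhs)   # the two 2-tuples should coincide up to floating-point rounding
```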

APPENDIX B
DEDUCTION OF 2TLEBM FROM W2TLEBM

From (23), we have

\begin{align*}
&W2TLEBM^{p,q}((g_1,\alpha_1),(g_2,\alpha_2),\ldots,(g_n,\alpha_n)) \\
&= \Delta\Biggl(\biggl(\Bigl(1-\sum_{i\in I'}\tfrac{1}{n}\Bigr)\Bigl(\tfrac{1}{1-\sum_{i\in I'}\frac{1}{n}}\sum_{i\notin I'}\tfrac{1}{n}\beta_i^{p}\Bigl(\tfrac{1}{\sum_{j\in I_i}\frac{1}{n}}\sum_{j\in I_i}\tfrac{1}{n}\beta_j^{q}\Bigr)\Bigr)^{\frac{p}{p+q}} + \sum_{i\in I'}\tfrac{1}{n}\Bigl(\tfrac{1}{\sum_{i\in I'}\frac{1}{n}}\sum_{i\in I'}\tfrac{1}{n}\beta_i^{p}\Bigr)\biggr)^{\frac{1}{p}}\Biggr) \\
&= \Delta\Biggl(\biggl(\Bigl(1-\tfrac{\mathrm{card}(I')}{n}\Bigr)\Bigl(\tfrac{1}{1-\frac{\mathrm{card}(I')}{n}}\sum_{i\notin I'}\tfrac{1}{n}\beta_i^{p}\Bigl(\tfrac{1}{\frac{\mathrm{card}(I_i)}{n}}\sum_{j\in I_i}\tfrac{1}{n}\beta_j^{q}\Bigr)\Bigr)^{\frac{p}{p+q}} + \tfrac{\mathrm{card}(I')}{n}\Bigl(\tfrac{1}{\frac{\mathrm{card}(I')}{n}}\sum_{i\in I'}\tfrac{1}{n}\beta_i^{p}\Bigr)\biggr)^{\frac{1}{p}}\Biggr) \\
&= \Delta\Biggl(\biggl(\frac{n-\mathrm{card}(I')}{n}\Bigl(\tfrac{1}{n-\mathrm{card}(I')}\sum_{i\notin I'}\beta_i^{p}\Bigl(\tfrac{1}{\mathrm{card}(I_i)}\sum_{j\in I_i}\beta_j^{q}\Bigr)\Bigr)^{\frac{p}{p+q}} + \frac{\mathrm{card}(I')}{n}\Bigl(\tfrac{1}{\mathrm{card}(I')}\sum_{i\in I'}\beta_i^{p}\Bigr)\biggr)^{\frac{1}{p}}\Biggr)
\end{align*}

APPENDIX C
DEDUCTION OF THE WEIGHTED 2-TUPLE LINGUISTIC ARITHMETIC MEAN FROM W2TLEBM

From (23), we obtain

\begin{align*}
&W2TLEBM^{1,0}((g_1,\alpha_1),(g_2,\alpha_2),\ldots,(g_n,\alpha_n)) \\
&= \Delta\Biggl(\Bigl(1-\sum_{i\in I'} w_i\Bigr)\Bigl(\tfrac{1}{1-\sum_{i\in I'} w_i}\sum_{i\notin I'} w_i\beta_i\Bigl(\tfrac{1}{\sum_{j\in I_i} w_j}\sum_{j\in I_i} w_j\beta_j^{0}\Bigr)\Bigr) + \sum_{i\in I'} w_i\Bigl(\tfrac{1}{\sum_{i\in I'} w_i}\sum_{i\in I'} w_i\beta_i\Bigr)\Biggr) \\
&= \Delta\Biggl(\sum_{i\notin I'} w_i\beta_i + \sum_{i\in I'} w_i\beta_i\Biggr) \\
&= \Delta\Biggl(\sum_{i=1}^{n} w_i\beta_i\Biggr)
\end{align*}

ACKNOWLEDGMENT

We are very grateful to the Editors and the anonymous reviewers for their insightful and constructive comments and suggestions for the improvement of the manuscript.

REFERENCES

[1] G. Bordogna, M. Fedrizzi, and G. Pasi, “A linguistic modeling of consensus in group decision making based on OWA operators,” IEEE Trans. Syst., Man, Cybern. - Part A: Syst. Humans, vol. 27, no. 1, pp. 126–133, 1997.

[2] F. Herrera and L. Martínez, “An approach for combining linguistic and numerical information based on the 2-tuple fuzzy linguistic representation model in decision-making,” Int. J. Uncertainty, Fuzziness Knowledge-Based Syst., vol. 8, no. 5, pp. 539–562, 2000.

[3] F. Herrera and L. Martínez, “A model based on linguistic 2-tuples for dealing with multigranular hierarchical linguistic contexts in multi-expert decision-making,” IEEE Trans. Syst. Man. Cybern. B. Cybern., vol. 31, no. 2, pp. 227–234, 2001.

[4] F. Herrera, L. Martínez, and P. Sánchez, “Managing non-homogeneous information in group decision making,” Eur. J. Oper. Res., vol. 166, no. 1, pp. 115–132, 2005.

[5] G. Wei, “A method for multiple attribute group decision making based on the ET-WG and ET-OWG operators with 2-tuple linguistic information,” Expert Syst. Appl., vol. 37, no. 12, pp. 7895–7900, 2010.

[6] G. Wei, “Some generalized aggregating operators with linguistic information and their application to multiple attribute group decision making,” Comput. Ind. Eng., vol. 61, no. 1, pp. 32–38, 2011.

[7] G. Wei and X. Zhao, “Some dependent aggregation operators with 2-tuple linguistic information and their application to multiple attribute group decision making,” Expert Syst. Appl., vol. 39, no. 5, pp. 5881–5886, 2012.

[8] S.-P. Wan, “2-tuple linguistic hybrid arithmetic aggregation operators and application to multi-attribute group decision making,” Knowledge-Based Syst., vol. 45, pp. 31–40, 2013.

[9] J. H. Park, J. M. Park, and Y. C. Kwun, “2-tuple linguistic harmonic operators and their applications in group decision making,” Knowledge-Based Syst., vol. 44, pp. 10–19, 2013.

[10] W. Yang and Z. Chen, “New aggregation operators based on the Choquet integral and 2-tuple linguistic information,” Expert Syst. Appl., vol. 39, no. 3, pp. 2662–2668, 2012.

[11] Y. Xu and H. Wang, “Approaches based on 2-tuple linguistic power aggregation operators for multiple attribute group decision making under linguistic environment,” Appl. Soft Comput., vol. 11, no. 5, pp. 3988–3997, 2011.

[12] G. Wei, “Extension of TOPSIS method for 2-tuple linguistic multiple attribute group decision making with incomplete weight information,” Knowl. Inf. Syst., vol. 25, no. 3, pp. 623–634, 2009.

[13] G. Wei, “Grey relational analysis method for 2-tuple linguistic multiple attribute group decision making with incomplete weight information,” Expert Syst. Appl., vol. 38, no. 5, pp. 4824–4828, 2011.

[14] M. Köksalan and C. Ulu, “An interactive approach for placing alternatives in preference classes,” Eur. J. Oper. Res., vol. 144, no. 2, pp. 429–439, 2003.

[15] R. Degani and G. Bortolan, “The problem of linguistic approximation in clinical decision making,” Int. J. Approx. Reason., vol. 2, no. 2, pp. 143–162, 1988.

[16] M. Delgado, J. L. Verdegay, and M. A. Vila, “On aggregation operations of linguistic labels,” Int. J. Intell. Syst., vol. 8, no. 3, pp. 351–370, 1993.

[17] F. Herrera and L. Martínez, “A 2-tuple fuzzy linguistic representation model for computing with words,” IEEE Trans. Fuzzy Syst., vol. 8, no. 6, pp. 746–752, 2000.

[18] J.-H. Wang and J. Hao, “A new version of 2-tuple fuzzy linguistic representation model for computing with words,” IEEE Trans. Fuzzy Syst., vol. 14, no. 3, pp. 435–445, 2006.

[19] Y. Dong, G. Zhang, W.-C. Hong, and S. Yu, “Linguistic computational model based on 2-tuples and intervals,” IEEE Trans. Fuzzy Syst., vol. 21, no. 6, pp. 1006–1018, 2013.

[20] R. M. Rodríguez and L. Martínez, “An analysis of symbolic linguistic computing models in decision making,” Int. J. Gen. Syst., vol. 42, no. 1, pp. 121–136, 2012.

[21] L. Martínez and F. Herrera, “An overview on the 2-tuple linguistic model for computing with words in decision making: Extensions, applications and challenges,” Inform. Sci., vol. 207, no. 1, pp. 1–18, 2012.


[22] Y. Jiang and J. Fan, “Property analysis of the aggregation operators for 2-tuple linguistic information,” Control and Decision, vol. 18, pp. 754–757, 2003.

[23] J. M. Merigó and A. M. Gil-Lafuente, “Induced 2-tuple linguistic generalized aggregation operators and their application in decision-making,” Inform. Sci., vol. 236, pp. 1–16, 2013.

[24] B. Zhu, Z. Xu, and M. Xia, “Hesitant fuzzy geometric Bonferroni means,” Inform. Sci., vol. 205, pp. 72–85, 2012.

[25] C. Bonferroni, “Sulle medie multiple di potenze,” Bollettino Matematica Italiana, vol. 5, pp. 267–270, 1950.

[26] R. R. Yager, “On generalized Bonferroni mean operators for multi-criteria aggregation,” Int. J. Approx. Reason., vol. 50, no. 8, pp. 1279–1286, 2009.

[27] G. Beliakov, S. James, J. Mordelová, T. Rückschlossová, and R. R. Yager, “Generalized Bonferroni mean operators in multi-criteria aggregation,” Fuzzy Sets Syst., vol. 161, no. 17, pp. 2227–2242, 2010.

[28] M. Xia, Z. Xu, and B. Zhu, “Generalized intuitionistic fuzzy Bonferroni means,” Int. J. Intell. Syst., vol. 27, no. 1, pp. 23–47, 2012.

[29] W. Zhou and J.-M. He, “Intuitionistic fuzzy normalized weighted Bonferroni mean and its application in multicriteria decision making,” J. Appl. Math., vol. 2012, Article ID 136254.

[30] G. Beliakov, S. James, and R. Mesiar, “A generalization of the Bonferroni mean based on partitions,” in Proc. IEEE Int. Conf. Fuzzy Syst. (FUZZ-IEEE), 2013, article no. 6622348.

[31] G. Choquet, “Theory of capacities,” Annales de l'Institut Fourier, vol. 5, pp. 131–295, 1954.

[32] M. Xia, Z. Xu, and B. Zhu, “Geometric Bonferroni means with their application in multi-criteria decision making,” Knowledge-Based Syst., vol. 40, pp. 88–100, 2013.

[33] Z. Xu and R. R. Yager, “Intuitionistic fuzzy Bonferroni means,” IEEE Trans. Syst. Man. Cybern. B. Cybern., vol. 41, no. 2, pp. 568–578, 2011.

[34] B. Dutta and D. Guha, “Trapezoidal intuitionistic fuzzy Bonferroni means and its application in multi-attribute decision making,” in Proc. IEEE Int. Conf. Fuzzy Syst. (FUZZ-IEEE), 2013, article no. 6622367.

[35] O. Cordón, F. Herrera, and I. Zwir, “Linguistic modeling by hierarchical systems of linguistic rules,” IEEE Trans. Fuzzy Syst., vol. 10, pp. 2–20, 2002.

[36] G. A. Miller, “The magical number seven, plus or minus two: Some limits on our capacity of processing information,” Psychol. Rev., vol. 63, pp. 81–97, 1956.

[37] D. Guha and D. Chakraborty, “A new approach to fuzzy distance measure and similarity measure between two generalized fuzzy numbers,” Appl. Soft Comput., vol. 10, no. 1, pp. 90–99, 2010.

[38] H. Bustince, E. Barrenechea, and M. Pagola, “Restricted equivalence functions,” Fuzzy Sets Syst., vol. 157, no. 17, pp. 2333–2346, 2006.

[39] J. Špirková, “Weighted operators based on dissimilarity function,” Inform. Sci., vol. 281, pp. 172–181, 2014.

[40] P. Bonacich and T. M. Liggett, “Asymptotics of a matrix valued Markov chain arising in sociology,” Stoch. Proc. Appl., vol. 104, pp. 155–171, 2003.

[41] C. H. Chu, K. C. Hung, and P. Julian, “A complete pattern recognition approach under Atanassov's intuitionistic fuzzy sets,” Knowledge-Based Syst., vol. 66, pp. 36–45, 2014.

Bapi Dutta received the M.Sc. degree in industrial mathematics and informatics from the Indian Institute of Technology Roorkee, Roorkee, India, in 2010 and the B.Sc. degree in mathematics from the University of Kalyani, Kalyani, India, in 2008. He is currently a Ph.D. candidate in the Department of Mathematics, Indian Institute of Technology Patna, Patna, India.

His current research interests include aggregation operators, multi-attribute decision making, fuzzy optimization, and type-2 fuzzy logic.

Debashree Guha received the B.Sc. and M.Sc. degrees in mathematics from Jadavpur University, Calcutta, India, in 2003 and 2005, respectively. She received the Ph.D. degree in mathematics from the Indian Institute of Technology Kharagpur, Kharagpur, India, in 2011.

Presently, she is an Assistant Professor in the Department of Mathematics, Indian Institute of Technology Patna, Patna, India. Her current research interests include multi-attribute decision making, fuzzy mathematical programming, aggregation operators, and fuzzy logic. Her research results have been published in Computers & Industrial Engineering and Applied Soft Computing, among others.

Radko Mesiar received the Ph.D. degree from Comenius University, Bratislava, Slovakia, and the D.Sc. degree from the Czech Academy of Sciences, Prague, in 1979 and 1996, respectively.

He is a Professor of mathematics at the Slovak University of Technology, Bratislava. His major research interests are in the areas of uncertainty modeling, fuzzy logic, several types of aggregation techniques, nonadditive measures, and integral theory. He is a coauthor of a monograph on triangular norms and author/coauthor of more than 200 journal papers and chapters in edited volumes. He is an Associate Editor of four international journals and a member of the European Association for Fuzzy Logic and Technology. He is a Fellow Researcher with UTIA AV CR, Prague (since 1995) and IRAFM, Ostrava (since 2005).