
TWO APPROACHES FOR FAIR RESOURCE ALLOCATION

a thesis submitted to

the graduate school of engineering and science

of bilkent university

in partial fulfillment of the requirements for

the degree of

master of science

in

industrial engineering

By

Mirel Yavuz

May 2018


TWO APPROACHES FOR FAIR RESOURCE ALLOCATION

By Mirel Yavuz

May 2018

We certify that we have read this thesis and that in our opinion it is fully adequate, in scope and in quality, as a thesis for the degree of Master of Science.

Özlem Karsu (Advisor)

Ayşe Selin Kocaman

Melih Çelik

Approved for the Graduate School of Engineering and Science:

Ezhan Karaşan


ABSTRACT

TWO APPROACHES FOR FAIR RESOURCE ALLOCATION

Mirel Yavuz

M.S. in Industrial Engineering
Advisor: Özlem Karsu

May 2018

Fairness has become one of the primary concerns in several Operational Research (OR) problems, especially in resource allocation problems. It is crucial to ensure a fair distribution of the resources across the entities so that the proposed solutions will be both applicable and acceptable. In many real-life systems, the most efficient solution will not be the most fair one, which creates a trade-off between efficiency and fairness. We propose two approaches to help decision makers (DMs) find an efficient solution that takes fairness of the distribution of resources into account.

The first approach we propose optimizes a specific subset of the set of Schur-concave functions, namely ordered weighted averaging (OWA) functions, which are able to reflect both efficiency and fairness concerns. We do not assume that the weights of the DM to be used in the OWA functions are readily available. Instead, we explore a wide range of weight vectors and report results for these different choices of weights. We illustrate the approach using a workload allocation problem and a knapsack problem and visualize the trade-off between fairness and efficiency.

In some applications, the DM may provide a reference point, so that the aim becomes finding an efficient solution that is preferable to this reference in terms of fairness. For such cases we propose a second approach that maximizes efficiency while controlling fairness concerns via a constraint. As in the first approach, fairness concerns are reflected using OWA function forms. However, the resulting formulation is non-linear. Thus, a hybrid interactive algorithm is presented that tackles this non-linearity using an enumerative approach. By interacting with the DM, the algorithm finds an efficient solution that OWA-dominates the reference point. The algorithm is tested on knapsack problems and shows successful performance.

Keywords: Resource allocation problem, Fairness, Knapsack problem, Interactive algorithm, Ordered weighted averaging.


ÖZET

EŞİTLİKÇİ KAYNAK DAĞITIMINA İKİ YAKLAŞIM

Mirel Yavuz

Endüstri Mühendisliği, Yüksek Lisans
Tez Danışmanı: Özlem Karsu

Mayıs 2018

Eşitlikçi bir yaklaşıma sahip olmak, birçok Yöneylem Araştırması (YA) probleminde, özellikle de kaynak dağıtımı problemlerinde öncelikli hale gelmiş bulunmaktadır. Önerilen çözümlerin kabul edilebilirliği ve uygulanabilirliği açısından kaynakların sistemin elemanları arasında eşit bir dağılıma sahip olması büyük önem taşımaktadır. Pek çok gerçek hayat sisteminde, en verimli çözüm en eşitlikçi çözüm değildir; bu da verimlilik ve eşitlikçilik arasında ödünleşmeye neden olmaktadır. Bu gözlemden yola çıkarak, karar vericilere kaynakların eşitlikçi dağıtımını da göz önünde bulundurarak verimli çözüm bulmaya yardımcı olmak için iki yaklaşım önerilmiştir.

İlk yaklaşımda, hem verimlilik hem de eşitlikçilik kaygılarını yansıtabilen Schur-konkav fonksiyonların özel bir alt kümesi olan sıralı ağırlıklandırılmış ortalama (SAO) fonksiyonlarının kullanımı önerilmektedir. SAO fonksiyonunda kullanılacak olan ve karar vericinin tercihlerini yansıtan ağırlıkların önceden bilinmediği varsayılmıştır. Geniş bir aralıkta değişen ağırlık vektörleri incelenmiş ve bu farklı ağırlık seçimleri için sonuçlar raporlanmıştır. Önerilen yaklaşımın, iş yükü dağıtım problemi ve sırt çantası problemi için kullanımı gösterilmiş ve bu problemlerde verimlilik ve eşitlikçilik arasındaki ödünleşme görselleştirilmiştir.

Bazı uygulamalarda ise amaç, karar verici tarafından belirlenen bir referans dağılımdan eşitlik açısından daha tercih edilebilir olan en verimli çözümü bulmaktır. Bu problemler için verimliliği ençoklamayı amaçlayan ve eşitlik kaygılarını bir kısıt vasıtasıyla kontrol eden bir yaklaşım önerilmiştir. İlk yaklaşımda olduğu gibi eşitlik kaygıları SAO fonksiyonu kullanılarak modele yansıtılmıştır. Ancak, elde edilen formülasyon doğrusal olmayan terimler içermektedir. Bu nedenle, doğrusal olmayan kısmın çözümü için sayıma dayalı yaklaşım kullanan, karar vericiyle etkileşimli melez bir algoritma tasarlanmıştır. Bu algoritmanın amacı SAO fonksiyonuna göre referanstan baskın olan verimli bir çözüm bulmaktır. Algoritma, sırt çantası problemlerinde test edilmiş ve başarılı bir performans sergilemiştir.

Anahtar sözcükler: Kaynak Dağıtım Problemi, Eşitlik, Sırt çantası problemi, Etkileşimli algoritma, Sıralı ağırlıklandırılmış ortalama.


Acknowledgement

First of all, I would like to thank TÜBİTAK (The Scientific and Technological Research Council of Turkey) for their financial support throughout this study (under Grant no: 215M713).

Furthermore, I would like to express my sincere gratitude to my thesis advisor Asst. Prof. Dr. Özlem Karsu for her guidance, support and patience throughout this study. Moreover, I would like to express my special gratitude to Asst. Prof. Dr. Ayşe Selin Kocaman and Asst. Prof. Dr. Melih Çelik for reviewing my thesis and for their valuable feedback. I would also like to thank the whole Industrial Engineering Department of Bilkent University for their encouragement.

Last but not least, I would like to express my indebtedness to my family for supporting me in every single moment in my life and my dearest friends for their constant encouragement, care and affection during this thesis.


Contents

1 Introduction and Problem Definition
2 Literature Review
  2.1 Social Welfare Function Based Approaches
  2.2 Inequality Index Based Approaches
3 Social Welfare Function Based Approach
  3.1 Schur Concave Social Welfare Functions
  3.2 Selecting weights of the OWA function
  3.3 Finding ordered permutations of the allocation vectors
    3.3.1 Binary Methodology
    3.3.2 Continuous Methodology
  3.4 Mathematical Formulations
    3.4.1 Binary Model
    3.4.2 Continuous Model
4 Computational Experiments for the Social Welfare Function Based Approach
  4.1 Workload Allocation
  4.2 Knapsack Problem
    4.2.1 Real-life example
    4.2.2 Randomly Generated Knapsack Problems
5 Ordered Weighted Averaging (OWA) Dominance Based Approach
  5.1 General Structure of the OWA Dominance Based Approach with a Reference Point
  5.2 Algorithm 1
  5.3 Algorithm 2
  5.4 Hybrid Interactive Algorithm
6 Computational Experiments for the OWA Dominance Based Approach
7 Conclusion
A SWF Based Approach
  A.1 Graphs Indicating Trade-off between Efficiency and Fairness for the Randomized Knapsack Problem
  A.2 Lorenz Curves for the Randomized Knapsack Problem
B OWA Dominance Based Approach
  B.1 Models for the Hybrid Interactive Algorithm in Categorized Knapsack Setting
    B.1.1 Efficiency Model (EM)
    B.1.2 Feasibility Model (FM)


List of Figures

2.1 Division of Approaches to Incorporate Fairness
3.1 Relationship between weights and OWA functions
4.1 Workload Allocation
4.2 Total Workload versus l
4.3 Lorenz Curves for the Workload Allocation
4.4 Total Benefit versus l for the Real-life Knapsack Problem
4.5 Lorenz Curves for the Real-life Knapsack Problem
5.1 Feasible Weight Sets
5.2 Flow Chart of Algorithm 1
5.3 Flow Chart of Algorithm 2
A.1 Trade-off between Efficiency and Fairness for the Randomized Knapsack Problem, n=150, m=5
A.2 Trade-off between Efficiency and Fairness for the Randomized Knapsack Problem, n=50, m=3
A.3 Lorenz Curves for the Randomized Knapsack Problem, n=150, m=5


List of Tables

4.1 Ability Levels that Employees Acquire
4.2 Ability Levels that Tasks Require
4.3 Relationship between Tasks and Employees
4.4 Time Units Required by Employees to Perform Tasks
4.5 Small Example for Categorized Knapsack
4.6 Results for the Randomized Knapsack
6.1 Comparison of Algorithm 1 and Algorithm 2
6.2 Efficiency Comparison of Algorithm 1 and Algorithm 2
6.3 Example Allocation, n=50, m=5
6.4 Results of the Hybrid Interactive Algorithm for Categorized Knapsack with Reference Point


Chapter 1

Introduction and Problem Definition

There are many real-life applications where equity concerns arise. These include, but are not limited to, resource allocation [1], [2], workload allocation [3], [4], air traffic flow management [5], [6], logistics [4], [7] and scheduling [8] (see [9] for details on these and other applications). In these settings, a system is designed or operated for multiple users, and it is important to ensure equity over the benefits/resources that they enjoy so that the proposed solutions will be both applicable and acceptable.

Thanks to its relevance in many Operational Research (OR) applications, there has been a growing interest in research devoted to addressing fairness concerns of decision makers in various optimization settings. Karsu and Morton [9] provide an extensive list of OR applications reported in the literature that involve fairness concerns. They classify the methods used to incorporate fairness concerns into optimization settings into two classes: inequality index based approaches and social welfare function based approaches. In the inequality index based approach, efficiency is maximized and an inequality index is used in the constraints to ensure a certain degree of fairness in the proposed solution; the index is sometimes used as an additional objective function to be minimized instead. The social welfare function based approach defines and maximizes a general social welfare function that encourages efficiency and fairness. The solution approach used in the first part of this thesis falls under the category of social welfare function based approaches. In the second part of the thesis, the approach taken is similar to the inequality index based approach in the sense that it incorporates constraints in order to ensure fairness in the allocation.

Karsu and Morton [9] also categorize the applications with respect to whether the users (entities from now on) are indistinguishable or not. If the identities of the entities matter, then the corresponding concern is balance; in such cases the ultimate aim may not be providing everyone with the same amount of benefit. However, when there is anonymity and the identities of the entities do not affect the decision, the "most fair" solution would be providing everyone with the same amount of benefit.

Fairness itself is typically not the only concern in real-life applications. One of the key challenges in inequity-averse optimization is the trade-off between efficiency and fairness. In most situations, the most fair allocation will not be the most efficient one, and vice versa; hence decision makers eventually have to make choices that involve such trade-offs. In social welfare function based approaches, the functional form is chosen so as to reflect both efficiency and fairness concerns, as we will discuss in the upcoming sections. As aforementioned, in the inequality index based approach, the efficiency concern is typically reflected in the objective function and the fairness concern is incorporated using constraints. There are also multi-criteria decision making applications, where fairness concerns are handled by adding the inequality index as another objective to the model.

The general problem that we consider in this study can be summarized as follows: We assume that a decision maker (DM) has to make a decision which will result in a distribution of some benefits to a set of entities. It is assumed that the entities are indistinguishable, i.e., no entity is more entitled to the benefit than the others. The DM wants to optimize some measure related to system efficiency, but she is also concerned about the disparity of the resulting distribution of benefits across the entities. As an example, one may think of a health care resource allocation problem, in which health care resources are distributed across various population groups. In such cases the efficiency concern corresponds to maximizing the total expected life years gained, while trying to avoid unfairness among the patient groups/individuals.

To put it more formally, the decision maker wants to choose an option x from some set X ⊆ Rn of actions/decisions that are available to her. A set of entities, K = {1, ..., m}, that receive the benefit as a result of the decision maker's choice is considered. In particular, it is assumed that the benefit enjoyed by entity k is given by zk = gk(x), where gk(·) : Rn → R. For simplicity, we will also write z = G(x) = (g1(x), ..., gm(x)) and Z = {z | z = G(x), x ∈ X}.
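For illustration only, this abstract setting can be encoded directly; the decision set and the linear benefit functions below are made-up examples, not from the thesis:

```python
# Toy encoding of the abstract setting (all data illustrative):
# a finite decision set X, per-entity benefit functions g_k, and outcome map G.

X = [(1, 0), (0, 1), (1, 1)]      # available actions/decisions x

def G(x):
    """Outcome map G(x) = (g_1(x), g_2(x)) for m = 2 entities."""
    g1 = 3 * x[0] + 1 * x[1]      # benefit enjoyed by entity 1
    g2 = 1 * x[0] + 3 * x[1]      # benefit enjoyed by entity 2
    return (g1, g2)

Z = {G(x) for x in X}             # feasible outcome space
assert G((1, 1)) == (4, 4) and (3, 1) in Z
```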

For this setting we will discuss two approaches: social welfare function based and OWA dominance based.

In the social welfare function approach, the aim is to maximize a social welfare function, which is a function of the benefit allocation. Let f(·) : Rm → R be the social welfare function to be optimized (assume a maximization setting without loss of generality).


The generic structure of the model is as follows:

Social Welfare Function Based Model:

    Maximize    f(z)
    subject to  x ∈ X
                z = G(x)

For instance, consider again a health care project selection problem. Assume that the health care projects focus on different patient groups (e.g. some may be beneficial to cancer patients while others may be beneficial to newborns), and the aim is to have an efficient and fair allocation of benefits among the patient groups. In this case, x refers to the decision of funding projects (which may be a vector of 0-1s), entities correspond to different patient groups, and z represents the benefit that these groups receive as a result of the decision x. The benefit could be the resulting quality adjusted life years (QALYs) that a patient group will receive if the corresponding project is initiated. f(z) is a social welfare function to be maximized, defined over the QALY distribution to the patient groups, that incorporates both fairness and efficiency.

As aforementioned, the social welfare function should be chosen in such a way that it incorporates both efficiency and fairness concerns. Therefore, it should be increasing and in line with the Pigou-Dalton Principle of Transfers, so that maximizing it yields a fair and efficient solution [9]. More discussion on the properties of such functions will be provided in the following chapters.
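As one concrete possibility (an illustration with made-up numbers, not the thesis's formulation, which is developed in Chapter 3), an OWA function with non-increasing weights applied to the sorted allocation satisfies both requirements:

```python
# Illustrative OWA social welfare function (weights made up): non-increasing
# weights applied to the increasingly sorted allocation put the largest weight
# on the worst-off entity, which encodes inequity-aversion.

def owa(z, w):
    """Dot product of the increasingly sorted allocation with w."""
    assert len(z) == len(w)
    return sum(wk * zk for wk, zk in zip(w, sorted(z)))

w = [0.5, 0.3, 0.2]        # non-increasing weights
unfair = [30, 10, 2]       # total benefit 42
fairer = [15, 14, 13]      # same total benefit, spread more evenly

# Pigou-Dalton flavour: at equal total benefit, the fairer allocation wins...
assert owa(fairer, w) > owa(unfair, w)
# ...while strict monotonicity still holds: giving any entity more helps.
assert owa([31, 10, 2], w) > owa(unfair, w)
```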

In the second approach to be introduced in this thesis (OWA dominance based), which is similar to the inequality index based approach, the aim is to maximize efficiency such that the resulting allocation will be preferred compared to a reference point the DM provides in terms of fairness.

Similar to the previous problem setting, we assume that the DM chooses an option from some set X ⊆ Rn of actions/decisions. K = {1, ..., m} is the set of entities that receive the benefit as a result of the decision maker's choice. Let u(·) : Rn → R be a function representing the efficiency of the chosen option x, to be optimized (assume a maximization setting without loss of generality), and let v(·) : Rm → R be a function incorporating the DM's preferences, including fairness. The benefit enjoyed by entity k is again given by zk = gk(x). Let a be the reference allocation provided by the DM, such that the DM wants a solution yielding a better allocation of benefits than this reference a in terms of fairness.


The generic structure of the model can be given as follows:

OWA Dominance Based Model:

    Maximize    u(x)
    subject to  x ∈ X
                z = G(x)
                v(z) > v(a)

In this setting, the focus is on obtaining a solution that is as efficient as possible (efficiency could be any suitable function defined over the decisions x) and at the same time no worse than a given reference allocation a in terms of fairness. If the function v(·) is an inequality index, then the approach reduces to the inequality index based approach. However, in this thesis, the function v(·) will be chosen such that it incorporates both fairness and efficiency concerns. Hence, this approach is similar to, but not the same as, the inequality index based approach.

Recall our example of the health care project selection problem. If the OWA dominance based model is used, then u(x) may refer to various efficiency measures, such as total QALYs gained, total number of life years gained, or number of deaths averted. The reference allocation vector a may refer to the allocation of QALYs that the patient groups will receive.
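To make this structure concrete, here is a brute-force sketch on a made-up toy project selection instance; the data, the non-increasing OWA weights used for v(·), and the total-benefit efficiency measure u(x) are illustrative assumptions, not the thesis's algorithm (which is developed in Chapter 5):

```python
# Brute-force sketch of the OWA dominance based model on a toy instance
# (all numbers illustrative). v(.) is an OWA function with non-increasing
# weights; u(x) is total benefit; a is the DM's reference allocation.

from itertools import product

def owa(z, w):
    return sum(wk * zk for wk, zk in zip(w, sorted(z)))

benefit = [(5, 0), (0, 4), (3, 3)]   # benefit[j][k]: project j's benefit to group k
cost, budget = [2, 2, 3], 5
w = [0.7, 0.3]                       # non-increasing OWA weights
a = (4, 3)                           # reference allocation from the DM

best = None
for x in product([0, 1], repeat=3):              # enumerate x in X
    if sum(c for c, xj in zip(cost, x) if xj) > budget:
        continue                                 # budget-infeasible
    z = tuple(sum(benefit[j][k] for j in range(3) if x[j]) for k in range(2))
    if owa(z, w) > owa(a, w):                    # fairness constraint v(z) > v(a)
        eff = sum(z)                             # efficiency u(x)
        if best is None or eff > best[0]:
            best = (eff, x, z)

assert best is not None              # the reference is attainable here
```

The search keeps only budget-feasible decisions whose OWA value strictly exceeds that of the reference, then picks the most efficient among them; the thesis instead handles the non-linear OWA constraint with a hybrid interactive algorithm rather than full enumeration.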

The organization of the thesis is as follows: In Chapter 2, a literature review on approaches to problems involving fairness concerns will be provided.

Chapters 3 and 4 are devoted to the Social Welfare Function based approach. In Chapter 3, the approach will be introduced by explaining Schur-concave social welfare functions and their appropriateness for decision making problems with fairness concerns. Then, the chosen Schur-concave social welfare function form, the Ordered Weighted Averaging (OWA) function, will be explained. OWA functions rely on weight vector parameters, and we will explain how to select these weights systematically to reflect different degrees of fairness.

In Chapter 4, we provide the computational experiments for this Social Welfare Function based approach on both case studies from the literature and randomly generated knapsack-type problem instances.

Chapters 5 and 6 will be devoted to the second approach that we will consider, which we call Ordered Weighted Averaging (OWA) dominance based approach. In Chapter 5 we will introduce the approach and explain the interactive algorithms that we propose.

In Chapter 6, the computational experiments for the OWA dominance based approach will be provided. Chapter 7 concludes and provides a discussion of future work.


Chapter 2

Literature Review

In the OR literature, there are several studies that address problems with fairness concerns while maximizing efficiency. The methods used to incorporate fairness into optimization models can be classified into two main categories. Figure 2.1 below summarizes this categorization.

[Figure 2.1: Division of Approaches to Incorporate Fairness. The methods split into the Inequality Index Based Approach (index used in the constraints, or in the objective function) and the Social Welfare Function Based Approach (Schur-concave, symmetric quasi-concave, symmetric concave, and Ordered Weighted Averaging (OWA) functions).]

In the next two sections we discuss the works in the literature that fall into the first and second categories, respectively.

2.1 Social Welfare Function Based Approaches

In the Social Welfare Function Based Approach (assuming the allocation of a good), the aim is to maximize a function which incorporates both fairness and efficiency. To be able to do that, such social welfare functions need to be increasing, symmetric, and in line with the Pigou-Dalton principle of transfers.

In the literature, different types of social welfare functions are used, such as symmetric concave, symmetric quasi-concave and Schur-concave functions. (Of course, if the allocation is based on a bad instead of a good, the functions to be optimized will be symmetric convex, symmetric quasi-convex and Schur-convex, respectively [9].)

Ball et al. [6] worked on matching problems that arise in ground delay program (GDP) planning. A GDP occurs when airport arrival demand is expected to exceed arrival capacity for an extended length of time. They aimed to minimize total delay while ensuring a fair distribution of the total delay across individual flights. To this end, they used Schur-convex aggregation functions. Marín et al. [10] used symmetric concave "ordered median functions" for discrete location problems. Ordered median functions are weighted total cost functions, where weights are rank-dependent. They suggested a reformulation of the discrete ordered median problem as a covering model and extended this covering model so that negative weights are also feasible, making the ordered median function symmetric and strictly concave. Hence, they showed that several discrete location problems with fairness objectives are special cases of such an extension. Martin et al. [11] worked on a nurse rostering problem, a scheduling problem, based on Bilgin et al. [12]. There are soft and hard constraints defined for the system. Hard constraints, such as no overlap between assignments, must be satisfied for a solution to be feasible, while violations of soft constraints, such as nurses' resting times, are penalized. They minimized a convex function of these penalties so that the distribution of soft-constraint violations across nurses' rosters is fair and no nurse is favored.

In the Social Welfare Function Based Approach, one needs to select a specific objective function which incorporates the efficiency and fairness concerns appropriately. Selecting such a function is not an easy task, and a poor choice may lead to unsatisfactory results. However, if a sufficiently suitable function can be selected, then formulating and solving the model may be relatively easy.

All the above cases are specific studies devoted to a given real-life application. In the first half of this thesis, we propose a generic Social Welfare Function Based Approach that could be used in any setting involving fairness and efficiency. In this approach, specific Schur-concave functions called Ordered Weighted Averaging (OWA) functions are used. If the weights of an OWA function are non-increasing and the allocation is ordered in an increasing fashion, then the OWA function will be Schur-concave, as will be explained in more detail in Chapter 3.
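This Schur-concavity claim can be spot-checked numerically; the following sketch (illustrative, with random made-up data) verifies that a Pigou-Dalton transfer never decreases such an OWA value:

```python
# Numerical spot-check (random illustrative data): with non-increasing weights
# on the increasingly sorted allocation, a Pigou-Dalton transfer from the
# best-off to the worst-off entity never decreases the OWA value, consistent
# with Schur-concavity.

import random

def owa(z, w):
    return sum(wk * zk for wk, zk in zip(w, sorted(z)))

random.seed(0)
w = sorted((random.random() for _ in range(4)), reverse=True)  # non-increasing
for _ in range(500):
    z = [random.uniform(0, 100) for _ in range(4)]
    i = min(range(4), key=z.__getitem__)           # worst-off entity
    j = max(range(4), key=z.__getitem__)           # best-off entity
    eps = random.uniform(0, z[j] - z[i])           # 0 <= eps < z_j - z_i
    zt = z[:]
    zt[i], zt[j] = z[i] + eps, z[j] - eps          # Pigou-Dalton transfer
    assert owa(zt, w) >= owa(z, w) - 1e-9          # welfare never drops
```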

OWA functions are defined in the literature; however, to the best of our knowledge, there is no work demonstrating the use of these functions on case studies.

2.2 Inequality Index Based Approaches

In the Inequality Index Based Approach, fairness concerns are incorporated into the model by using inequality indices I(·) : Rm → R, which assign to a given distribution z a scalar value that shows its degree of inequality [9].

There are two ways in which such an inequality index is incorporated into the model. One of them is to include it as another objective in the model and solve the resulting multi-criteria decision making problem. Examples can be given from various applications in which efficiency and fairness are handled in such a multi-criteria decision making framework. Ogryczak [7] worked on location problems in which distance is minimized while treating customers in a fair manner. He used the mean deviation as an inequality index, added it to the objective function next to the efficiency objective, and formulated a bi-criteria problem. Ohsawa et al. [13] worked on a bi-criteria facility location problem considering both efficiency and fairness in the objective function. The fairness concern is handled by minimizing the sum of absolute differences between all pairs of squared Euclidean distances from inhabitants to the facility. They used two different measures for efficiency: one minimizes the sum of squared distances from inhabitants to an attracting facility, while the other maximizes the sum of squared distances from inhabitants to a repellent facility.

Turkcan et al. [8] worked on sequential appointment scheduling with service criteria and proposed different fairness measures, such as minimizing the difference between the number of patients in the system at the beginning of each slot and minimizing the difference between the expected waiting times of patients arriving at each slot. They incorporated such measures into the objective function as well. Jang et al. [14] worked on scheduling and routing policies that minimize total travel distance while balancing the workload. They focused on an example in which lottery sales representatives (LSRs) are considered. They used multiple criteria, one of which concerned allocating the workload across LSRs in a fair manner, achieved by penalizing imbalanced routes in the objective function. Ramos and Oliveira [4] worked on a reverse logistics problem with multiple depots. Their work was based on a case study of a recyclable waste collection system with five depots covering seven municipalities in southern Portugal. Their aim was to minimize the variable costs, which are a function of the distances traveled by the vehicles, while ensuring fair workload allocations among the depots. To this end, they added the objective of minimizing the workload differences among depots to the objective function.

The other way is to incorporate the inequality index into the constraints and maximize an efficiency metric. Chang et al. [2] worked on an optimal time slot and channel allocation problem considering capacity fairness among cells. They maximized system capacity, incorporated the traffic unbalance into the constraints, and came up with a formulation that is non-linear. They proposed a procedure to solve this model using simulated annealing. McLay and Mayorga [15] worked on a problem of dispatching servers to customers in a service system, an extension of their former work [16]. Their aim was to efficiently dispatch distinguishable servers to prioritized customers, and they observed that their former approach may result in inequities. Hence, they extended their previous work so that servers are dispatched to customers while satisfying fairness constraints. They defined four different fairness constraints: two from the customers' point of view and two from the servers' point of view. They added these fairness constraints to their formulation so that minimum levels of allocation to each entity were set in each case.

In the Inequality Index Based Approach, if the inequality index is going to be used in the constraints, then a linear (or linearizable) function incorporating fairness concerns should be defined. Quantifying fairness concerns is not an easy task in itself, and doing so with a linear (or linearizable) function is even harder. However, if such a function can be constructed that suits the problem, then solving the model may again be relatively easy. If the inequality index is instead incorporated into the objectives, then a multi-criteria decision making problem is obtained. Hence, in order to solve the model, techniques for multi-criteria decision making problems should be used, which can make the model harder to solve than the other alternatives.

The second approach used in this thesis is similar to the Inequality Index Based Approach in the sense that it incorporates constraints in order to ensure fairness in the allocation. However, unlike the Inequality Index Based Approach, fairness and efficiency are incorporated together in those constraints. The function used for this purpose is an OWA function, rather than an inequality index.

In this thesis, we develop a novel algorithm, called the Hybrid Interactive Algorithm. The algorithm interacts with the DM, progresses by eliciting her preference information, and finds an efficient solution that is better than the reference point given by the DM in terms of fairness. The algorithm also notifies the DM if the standards expressed by her reference point are too high.


Chapter 3

Social Welfare Function Based Approach

In this chapter, we discuss the first solution approach that we use to obtain fair and efficient benefit distributions. The approach we suggest is maximizing a specific Schur-concave function: an Ordered Weighted Averaging (OWA) function. Hence, we will first discuss Schur-concave social welfare functions and explain why they are appropriate for considering both efficiency and fairness in decision making settings. Then we will provide a formal definition of the OWA function and explain how we select its weights in a systematic manner to reflect various degrees of inequity-aversion.

3.1 Schur Concave Social Welfare Functions

As aforementioned, one of the ways to address problems having fairness concerns is using an appropriately chosen social welfare function. The chosen function should satisfy a certain set of properties that we will discuss below to ensure that both efficiency and fairness concerns are captured.

Let f(·) : Rm → R denote the social welfare function to be maximized. Let X ⊆ Rn be the decision space and Z = {z ∈ Rm : z = G(x), x ∈ X} be the feasible outcome space, as defined before. The aim is to maximize a social welfare function f(z) such that z ∈ Z.

For a function to address fairness concerns, it should be able to represent the preferences of an inequity-averse decision maker. We will denote the weak preference relation of the Decision Maker (DM) as ⪯ (the corresponding strict preference and indifference relations are denoted ≺ and ∼, respectively). A typical outcome vector is z^t = (z^t_1, z^t_2, ..., z^t_m), where z^t_k represents the outcome value of entity k ∈ {1, 2, ..., m} and t is the index of the alternative [9]. Since the decision maker is inequity-averse, we assume that the following axioms hold for her preference relation [9], [17], [18]:

1) Reflexivity: z ⪯ z, ∀z ∈ Z.

2) Transitivity: (z1 ⪯ z2 and z2 ⪯ z3) =⇒ z1 ⪯ z3, ∀z1, z2, z3 ∈ Z.

3) Strict Monotonicity: z1 < z2 =⇒ z1 ≺ z2, ∀z1, z2 ∈ Z.

4) Anonymity: z ∼ Qh(z), ∀h = 1, ..., m!, ∀z ∈ Z, where Qh(z) is any permutation of the vector z.

5) Pigou-Dalton Principle of Transfers: zj > zi =⇒ z ≺ z − εej + εei, ∀z ∈ Z, where 0 < ε < zj − zi and ei, ej are the ith and jth unit vectors in Rm, respectively.

The first three assumptions are needed to define a rational preference relation. The reflexivity assumption implies that each alternative is as good as itself. The transitivity assumption means that if alternative z2 is preferred to alternative z1 and alternative z3 is preferred to alternative z2, then, to be rational, alternative z3 should be preferred to alternative z1. The strict monotonicity assumption captures "the more the better": one can increase the amount received by one of the entities, keeping what the others get the same, and obtain a better allocation. For example, comparing allocations (100,40,38,55,79) and (101,40,38,55,79), the second alternative is preferred since the first entity gets more in the second alternative and none of the other entities gets less.

The anonymity and Pigou-Dalton principle of transfers assumptions are needed in order to incorporate fairness concerns. The anonymity assumption implies that all permutations of an allocation should be treated as the same, so that the entities in the system are indistinguishable. As an example, allocations (10,15,20) and (15,10,20) should be equivalent in the DM's preferences given that entities are indistinguishable. The Pigou-Dalton principle of transfers implies that transferring benefit from a better-off entity to a worse-off one yields a more preferred alternative. As an example, consider the alternative with the allocation (100,70,25,40). Giving 30 units from the first entity to the fourth entity yields the allocation (70,70,25,70). Comparing the two, the latter is preferred as it is a more equitable allocation than the former.

The preference relations that satisfy the above 5 axioms are called equitable rational preference relations [19]. Subsequently, the definitions of the relations equitable dominance, equitable weak dominance, and equitable indifference are as follows [9]:

Definition 1: For any two outcome vectors z^1 and z^2, z^1 ≺_e (⪯_e / ∼_e) z^2 (z^2 equitably dominates / equitably weakly dominates / is equitably indifferent to z^1) iff z^1 ≺ (⪯ / ∼) z^2 for all equitable rational preference relations ⪯.

Equitable dominance is called generalized Lorenz dominance as well [9], [20].

There are a number of properties that a function should satisfy in order to be inequity-averse. Firstly, the function should be symmetric in the sense that it respects the anonymity axiom and the entities being indistinguishable. In addition, the function should reflect inequity-aversion and the efficiency-fairness trade-off. Hence, it should also satisfy Pigou-Dalton principle of transfers and strict monotonicity axioms. Such functions are called equitable aggregation functions and are defined as follows [9] :

Definition 2: An equitable aggregation function is a function f (·) : Rm → R, for which the following holds:

1) z^1 < z^2 =⇒ f(z^1) < f(z^2), ∀z^1, z^2 ∈ Z.

2) f(z) = f(Q_h(z)), ∀z ∈ Z, where Q_h(z) is an arbitrary permutation of the vector z.

3) z_j > z_i =⇒ f(z) < f(z − εe_j + εe_i), ∀z ∈ Z, where 0 < ε < z_j − z_i and e_i and e_j are the ith and jth unit vectors in R^m, respectively.

The first property implies the strict monotonicity axiom, the second property implies the symmetry of the function and subsequently addresses anonymity axiom, and the last property addresses the Pigou-Dalton principle of transfers.

All equitable aggregation functions are Schur-concave [17]. Note that if we are allocating a bad instead of a good such that the setting is minimization, then the function to be minimized should be Schur-convex. We will define a Schur-concave function by first giving the definition of the bistochastic matrix [9].

Definition 3: A bistochastic (doubly stochastic) matrix (Q) is a square matrix having all non-negative entries and each row and column of it summing up to 1.
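A bistochastic matrix that is not a permutation matrix "smooths" an outcome vector toward equality, and a Schur-concave function never decreases under such smoothing (as formalized in the next definition). A minimal Python sketch, using f(z) = Σ_k √z_k (a strictly Schur-concave function chosen purely for illustration; the function names are ours):

```python
import math

def f(z):
    # Sum of a strictly concave function of each entry: strictly Schur-concave.
    return sum(math.sqrt(v) for v in z)

def matvec(Q, z):
    # Multiply the square matrix Q by the vector z.
    return [sum(qij * zj for qij, zj in zip(row, z)) for row in Q]

# Bistochastic but not a permutation matrix: non-negative entries,
# every row and every column sums to 1.
Q = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 0.0, 1.0]]

z = [100.0, 40.0, 10.0]
Qz = matvec(Q, z)      # [70.0, 70.0, 10.0]: the allocation is smoothed
print(f(Qz) > f(z))    # True
```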

Definition 4: A function f(·) is strictly Schur-concave (Schur-convex) if and only if for all bistochastic matrices Q that are not permutation matrices, f(Qz) > f(z) (f(Qz) < f(z)). Schur-concave functions are symmetric by definition. Moreover, there are various Schur-concave function forms, such as symmetric quasi-concave functions, symmetric concave functions, etc. Some specific forms of the Ordered Weighted Averaging (OWA) functions are also Schur-concave. Ordered weighted averaging was introduced by Yager [21] as follows:


Definition 5: A mapping F from R^m → R is called an OWA operator of dimension m if there is a weight vector w associated with F such that w_k ∈ [0, 1] and Σ_{k=1}^m w_k = 1, and F(z_1, z_2, ..., z_m) = w_1 p_1 + w_2 p_2 + ... + w_k p_k + ... + w_m p_m, where p_k is the kth smallest element in the collection (z_1, z_2, ..., z_m).

Notice that there is no ordering specified for the weight vector in this general OWA definition. The weight w1 is the weight of the smallest element of z. A small example incorporating the OWA functions can be constructed as follows:

Assume that the allocation z is (10, 40, 30) and the weight vector w is (0.2, 0.5, 0.3). The ordered vector p is (10, 30, 40). The OWA value is the dot product of the vectors w and p: (0.2)(10) + (0.5)(30) + (0.3)(40) = 29.
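The computation above can be sketched in a few lines of Python (the function name `owa` is ours):

```python
def owa(z, w):
    # Definition 5: apply weights to the sorted outcomes;
    # w[0] weights the smallest element of z.
    p = sorted(z)
    return sum(wk * pk for wk, pk in zip(w, p))

print(owa([10, 40, 30], [0.2, 0.5, 0.3]))   # 29 (up to floating point)

# With strictly decreasing weights the operator is inequity-averse: a
# Pigou-Dalton transfer of 10 units from the best-off to the worst-off
# entity increases the OWA value.
w_dec = [0.5, 0.3, 0.2]
print(owa([10, 40, 30], w_dec) < owa([20, 30, 30], w_dec))   # True
```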

As can be seen from the above small example, such a function is symmetric (so it satisfies anonymity) but it does not necessarily satisfy inequity-aversion (Pigou-Dalton principle of transfers). Hence, the definition should be modified so that the OWA function will be inequity-averse [22]. The modification is as follows:

Definition 6: Without loss of generality, assuming a maximization setting, if the outcome vector z is ordered from the lowest to the highest value (denoted →z) and the weight vector w is strictly decreasing (denoted ←w), the OWA aggregation is an equitable aggregation function.

Ordering z in an increasing fashion addresses the anonymity axiom, while ordering the weight vector in a decreasing fashion addresses the Pigou-Dalton principle of transfers [22]. In order to formulate a mathematical model to solve the problem, the weights of the OWA function should first be determined, as mentioned above. Moreover, as stated by Yager [23], finding the increasing permutation of z leads to non-linearity. Therefore, in the next two sections we explain the method we used to generate different weights, covering a wide range of possible OWA forms, and then the two methods to endogenously order the allocation vectors in the models. Finally, the corresponding mathematical models are presented.

3.2 Selecting weights of the OWA function

In order to maximize the OWA function Σ_{k=1}^m ←w_k →z_k, the weights of the DM should be known. There are infinitely many different choices of weights, as weights differ from one DM to another. In this thesis, we do not assume that these weights are readily available. We will rather explore a wide range of weight vectors and report results for these different choices of weights.


There are two special cases at the two extremes: the utilitarian approach and the Rawlsian approach. In the utilitarian approach, everyone is treated the same regardless of what they receive, so the idea is to maximize the total benefit. This approach results in all weights being equal to each other. The Rawlsian approach is based on the veil of ignorance philosophy: the aim is to maximize what the worst-off entity in the system receives. This approach results in giving all the weight to the worst-off entity and zero weight to the remaining ones. Between these extremes there are infinitely many OWA functions, which can be obtained by ranging the weight distribution between the two. Without loss of generality, assume that the number of categories is 5. Figure 3.1 illustrates the relationship between different OWA functions and the weights of the decision maker:

[Figure: cumulative weight Σ_{j=1}^k w_j (vertical axis, from 0 to 1) plotted against the ordered entity index k = 1, ..., 5 (horizontal axis); the utilitarian weights trace the straight diagonal line, while the Rawlsian weights jump to 1 at k = 1.]

Figure 3.1: Relationship between weights and OWA functions

The horizontal axis of Figure 3.1 represents entities (ordered in a non-decreasing manner with respect to their allocated amount); i.e., 1 represents the worst-off entity, 2 represents the second worst-off entity, etc. The vertical axis shows the cumulative weights of the entities; w_k represents the weight of the kth worst-off entity.

As can be seen in the graph, other feasible weight combinations lie between the weight combinations of the Rawlsian approach and the utilitarian approach. In order to come up with different feasible weight combinations, a parametric function is needed, which should be concave with respect to the cumulative weights as shown in Figure 3.1, so that the worst-off entity gets the highest weight, the second worst-off entity gets the second highest weight, and so on. Therefore, the following formula is suggested: Σ_{j=1}^k w_j = (k/m)^l, where m is the total number of categories. Notice that in order for this function to be concave, l ∈ [0, 1] should be satisfied. The concavity of the function is controlled by varying l: when l = 1, the function is a line, representing the utilitarian approach, and when l = 0, the function represents the Rawlsian approach.
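This weight-generation rule can be sketched as follows (Python; the function name is ours). The individual weights are recovered as consecutive differences of the cumulative values (k/m)^l:

```python
def owa_weights(m, l):
    # Cumulative rule: w_1 + ... + w_k = (k/m)**l. The cumulative value at
    # k = 0 is fixed to 0 so that l = 0 correctly yields the Rawlsian weights.
    cum = [0.0] + [(k / m) ** l for k in range(1, m + 1)]
    return [cum[k] - cum[k - 1] for k in range(1, m + 1)]

print(owa_weights(5, 1))     # equal weights: the utilitarian extreme
print(owa_weights(5, 0))     # [1.0, 0.0, 0.0, 0.0, 0.0]: the Rawlsian extreme
print(owa_weights(5, 0.5))   # strictly decreasing weights in between
```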

3.3 Finding ordered permutations of the allocation vectors

The OWA function Σ_{k=1}^m ←w_k →z_k requires the allocation vector z to be ordered; however, this allocation vector is unknown in advance, i.e., it is a vector of decision variables. Therefore, a methodology to find the ordered permutation of this allocation vector z, call this ordered vector p, should be constructed. There are two methodologies to find the p vector: one is called the Binary Methodology and the other the Continuous Methodology.

3.3.1 Binary Methodology

In the Binary Methodology, additional binary variables (d_ks) are defined so that the ordering is performed. Let M be the set {1, 2, ..., m}.

p_s − z_k ≤ M1(1 − d_ks) ∀k, s ∈ M
p_s − z_k ≥ −M1(1 − d_ks) ∀k, s ∈ M
Σ_{s∈M} d_ks = 1 ∀k ∈ M
Σ_{k∈M} d_ks = 1 ∀s ∈ M
p_{s−1} ≤ p_s ∀s ∈ {2, ..., m}
d_ks ∈ {0, 1} ∀k, s ∈ M

where M1 is a sufficiently big number. The d_ks, ∀k, s, are auxiliary variables that are used to find the ordered permutation of the distribution z, which is the decision variable vector p.


The d_ks are binary variables defined as follows: d_ks = 1 if the kth element in the original distribution z is the sth element in the ordered permutation p, and 0 otherwise.

The above six sets of constraints should be added to the mathematical model so that the ordered permutation p of the vector z can be found.

3.3.2 Continuous Methodology

In order to get the ordered vector p, Ogryczak and Sliwinski [22] constructed a methodology that does not involve binary variables. Let σ be the cumulative ordered vector, such that σ_s = p_1 + p_2 + ... + p_s, ∀s.

Theorem 1: For any given vector z ∈ R^m, the cumulated ordered coefficient σ_s can be found as the optimal value of the following problem:

σ_s = max s·r_s − Σ_{k∈M} d_sk
s.t. r_s − d_sk ≤ z_k ∀k, s ∈ M
d_sk ≥ 0 ∀k, s ∈ M

Proof of Theorem 1: Please refer to [22] for the proof.

In this methodology, the auxiliary continuous variables r_s and d_sk are used in order to find the ordered permutation vector p. Notice that the σ vector is the cumulated vector of the p vector. Hence, the p vector can easily be obtained once the σ vector is known, i.e., p_1 = σ_1 and p_s = σ_s − σ_{s−1}, ∀s ∈ {2, ..., m}. The above set of constraints should be added to the mathematical model and the objective function should be arranged accordingly so that the ordered permutation p of the vector z can be found.
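The problem in Theorem 1 admits a simple numerical check: substituting the optimal d_sk = max(r_s − z_k, 0) (implied by the constraints) gives a closed form, and the maximizing r_s can be restricted to the values appearing in z. The following Python sketch (function names ours) confirms that the resulting σ values are the cumulative sums of the sorted vector:

```python
def sigma(z, s):
    # Theorem 1 with the optimal d_sk = max(r - z_k, 0) substituted in;
    # the maximizer r can be restricted to the values appearing in z.
    return max(s * r - sum(max(r - zk, 0) for zk in z) for r in z)

z = [40, 10, 30]
sigmas = [sigma(z, s) for s in (1, 2, 3)]
print(sigmas)   # [10, 40, 80]: cumulative sums of the sorted vector (10, 30, 40)

# Recover the ordered vector p: p_1 = sigma_1, p_s = sigma_s - sigma_{s-1}
p = [sigmas[0]] + [sigmas[s] - sigmas[s - 1] for s in (1, 2)]
print(p)        # [10, 30, 40]
```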

In addition, the ordering is still guaranteed when the objective function in Theorem 1 is modified as max Σ_{s∈M} w_s(s·r_s − Σ_{k∈M} d_sk).

3.4 Mathematical Formulations

Both the binary and the continuous methodologies can be used to arrive at the mathematical formulations. Hence, both versions of the formulations will be presented.

3.4.1 Binary Model

max Σ_{s∈M} w_s p_s
s.t.
x ∈ X (3.1)
z_k = g_k(x) ∀k ∈ M (3.2)
p_s − z_k ≤ M1(1 − d_ks) ∀k, s ∈ M (3.3)
p_s − z_k ≥ −M1(1 − d_ks) ∀k, s ∈ M (3.4)
Σ_{s∈M} d_ks = 1 ∀k ∈ M (3.5)
Σ_{k∈M} d_ks = 1 ∀s ∈ M (3.6)
p_{s−1} ≤ p_s ∀s ∈ {2, ..., m} (3.7)
d_ks ∈ {0, 1} ∀k, s ∈ M (3.8)

where M1 is a sufficiently big number. This model maximizes the OWA function Σ_{s∈M} w_s p_s over all feasible decisions x ∈ X. Constraint set 3.2 relates decisions to the distributions through the functions g_k(·). The ordered permutation vector p is obtained by using the binary methodology.


3.4.2 Continuous Model

max Σ_{s∈M} w_s p_s
s.t.
Constraint sets (3.1)–(3.2)
r_s − d_sk ≤ z_k ∀k, s ∈ M (3.9)
p_s = (s·r_s − Σ_{k∈M} d_sk) − ((s−1)·r_{s−1} − Σ_{k∈M} d_{(s−1)k}) ∀s ∈ M (3.10)
d_sk ≥ 0 ∀k, s ∈ M (3.11)

The ordered permutation vector p is obtained with the use of auxiliary continuous variables; hence we call this model the continuous model.

In this model, we used the ordered vector p instead of the cumulative ordered vector σ in the objective function. In Theorem 2, we show that using p is equivalent to using σ as long as appropriate weights are used.

Theorem 2: max Σ_{s∈M} w_s p_s, with w_s ≥ 0 ∀s and w non-increasing, is equivalent to max Σ_{s∈M} w′_s σ_s, where w′_m = w_m and w′_s = w_s − w_{s+1}, ∀s ∈ {1, ..., m − 1}. Since w is non-increasing, w′ ≥ 0, so the ordering will be correctly performed.

Proof of Theorem 2: We need to show that Σ_{s∈M} w_s p_s = Σ_{s∈M} w′_s(s·r_s − Σ_{k∈M} d_sk). We know that at optimality σ_s = s·r_s − Σ_{k∈M} d_sk from Theorem 1, so Σ_{s∈M} w′_s(s·r_s − Σ_{k∈M} d_sk) = Σ_{s∈M} w′_s σ_s. Then, using the definition of σ (σ_s = p_1 + p_2 + ... + p_s) and of w′ (w′_s = w_s − w_{s+1} for s < m, w′_m = w_m), we obtain

Σ_{s∈M} w′_s σ_s = (w_1 − w_2)p_1 + (w_2 − w_3)(p_1 + p_2) + ... + (w_{m−1} − w_m)(p_1 + ... + p_{m−1}) + w_m(p_1 + ... + p_m).

Collecting the coefficient of each p_s, every term w_{s+1}(p_1 + ... + p_s) cancels telescopically, leaving

Σ_{s∈M} w′_s σ_s = w_1 p_1 + w_2 p_2 + ... + w_m p_m = Σ_{s∈M} w_s p_s.

A small example for Theorem 2 can be constructed as follows. Assume that the p vector is (2, 3, 4) and the w vector is (0.6, 0.3, 0.1). Then the σ vector is (2, 5, 9) and the w′ vector is (0.3, 0.2, 0.1). Indeed, Σ_{s=1}^3 w_s p_s = (0.6)(2) + (0.3)(3) + (0.1)(4) = 2.5 = (0.3)(2) + (0.2)(5) + (0.1)(9) = Σ_{s=1}^3 w′_s σ_s.
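The equivalence in Theorem 2 and the example above can be checked numerically; the following Python sketch (function names ours) computes both sides:

```python
def direct_form(p, w):
    # sum_s w_s p_s, with p already ordered
    return sum(ws * ps for ws, ps in zip(w, p))

def cumulative_form(p, w):
    # sum_s w'_s sigma_s, with w'_s = w_s - w_{s+1} (w'_m = w_m)
    # and sigma_s = p_1 + ... + p_s
    m = len(p)
    sigma = [sum(p[:s + 1]) for s in range(m)]
    w_prime = [w[s] - w[s + 1] for s in range(m - 1)] + [w[-1]]
    return sum(ws * ss for ws, ss in zip(w_prime, sigma))

p, w = [2, 3, 4], [0.6, 0.3, 0.1]
print(direct_form(p, w), cumulative_form(p, w))   # both 2.5 (up to rounding)
```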


When the instances become larger, the binary model is slower than the continuous model in terms of solution times due to the number of binary decision variables in the model. Hence, in such cases, the continuous model works better. However, in settings where the objective is not to maximize an OWA function but another function, the continuous model requires an additional term in the objective function to obtain the correct ordering. That is, in order to be able to perform the ordering, Σ_{s∈M}(s·r_s − Σ_{k∈M} d_sk) should be added to the objective function, multiplied by a small number ε. An upper bound on the ε value can be obtained, which will be explained in detail in Chapter 5. However, in some cases determining ε may not be trivial. For instance, if the ε value needs to be small due to the problem parameters, then the optimality gap should be set small as well to guarantee optimality, which may cause issues in terms of computation time. In such cases, using the binary model may be a better option.


Chapter 4

Computational Experiments for the Social Welfare Function Based Approach

In order to demonstrate the usability of the proposed solution approach, we have used two different case studies. The first case study is inspired by the work of Eiselt and Marianov [3] and is related to workload allocation. The other is a knapsack problem in which we want to allocate resources in an equitable fashion, maximizing the benefit without exceeding the budget. The data for this case study come from [1].

In this chapter, firstly the case study of the workload allocation will be presented and its results will be discussed. Then, the knapsack problem will be defined and the solutions for a real-life example will be discussed. Finally, solutions of 60 randomly generated knapsack problem instances will be given.

4.1 Workload Allocation

The first case study is a workload allocation example considered in [3]. The problem is assigning a set of (repetitive) tasks to a set of employees. Each task requires a level of ability, denoted r, that the workers who will perform it should have acquired; the ability levels of the workers are denoted a. Hence, not every employee is capable of performing every task. Each task also has a frequency, i.e., the number of times it should be performed in order to make the system work smoothly.


Figure 4.1 illustrates the problem structure:

[Figure: a bipartite diagram linking tasks 1, ..., n (with their frequencies inside the circles and the ability levels they require) to employees 1, ..., m (with the ability levels they acquire).]

Figure 4.1: Workload Allocation

This figure visualizes a small example of the workload allocation problem. There are n tasks and m employees, and the frequencies of the tasks (the number of times each task should be performed) are written inside the circles. Each task should be performed as many times as required, and each task requires levels in different ability types that an employee must have acquired in order to perform that particular task.

Eiselt and Marianov [3] aim to minimize the total cost (overtime and subcontracting cost), total boredom and inequality of workload allocation. They solve this multiple criteria decision making problem by using a weighted aggregation of the objective functions.

In our case study, we modified the problem so that the aim is finding an efficient and equitable allocation of the workload among indistinguishable employees. In the article, the times that tasks require are taken as constant across employees; however, this assumption is not realistic, since the time required to perform a task may depend on the worker it is assigned to. To make the problem more realistic, we assumed that the duration of a task varies among workers. We do not take cost or boredom into account.


This problem is formulated adopting both binary and continuous methodology; however, due to the size of the problem, binary formulation could not be solved in a reasonable amount of time. Thus, the continuous formulation is preferred. The continuous formulation for the workload allocation case study is as follows:

Sets:
N: set of tasks = {1, 2, ..., n}
M: set of employees = {1, 2, ..., m}
J: set of abilities = {1, 2, ..., h}

Parameters:
y_ik = 1 if employee k can perform task i, 0 otherwise
f_i: frequency with which task i should be performed, ∀i ∈ N
t_ik: duration of task i for employee k, ∀i ∈ N, ∀k ∈ M
w_s: weight of the sth worst-off category, ∀s ∈ M
M1, M2: sufficiently big numbers

Decision variables:
x_ik: the number of times task i is assigned to employee k, ∀i ∈ N, ∀k ∈ M
z_k: total workload of employee k, ∀k ∈ M
z̄_k: auxiliary variable for the total workload of employee k, ∀k ∈ M
p_s: sth element in the ordered permutation of workload allocations, ∀s ∈ M

The level of ability j acquired by employee k is denoted a_kj, and the level of ability j that task i requires is denoted r_ij. The matrix y_ik is obtained such that if the a_kj value of employee k is greater than or equal to r_ij for all j, then worker k is capable of performing task i and y_ik takes the value 1.

The time t_ik required by employee k to perform task i is obtained by using a formula based on Chebyshev's distance function: t_ik = ⌈10 / max_j{a_kj − r_ij, 0}⌉. It is of course possible to use different distance functions in order to calculate the required time. The reasoning behind the selection of this formula is as follows: the larger the difference between a worker's ability levels and the required ability levels, the less difficulty the worker has with the task, and hence the shorter the duration of the task for that employee. The values are rounded up for simplicity, since we have a mixed-integer linear program.

Let us give a small example of how to determine the values of the t matrix and the y matrix given a and r. Assume that we have two employees, five tasks and three skills (ability levels), as follows:

Table 4.1: Ability Levels that Employees Acquire

             Level in ability 1   Level in ability 2   Level in ability 3
Employee 1           2                    2                    5
Employee 2           1                    4                    3

Table 4.2: Ability Levels that Tasks Require

         Level in ability 1   Level in ability 2   Level in ability 3
Task 1           2                    1                    4
Task 2           1                    3                    2
Task 3           1                    3                    3
Task 4           1                    2                    2
Task 5           1                    4                    3

The values in Table 4.1 correspond to the values of the vector a, while the values in Table 4.2 correspond to the values of the vector r. We know that an employee can perform a task if and only if she has acquired ability levels greater than or equal to all of the ability levels required by that task. Thus, for our small example, Table 4.3 can be constructed:

Table 4.3: Relationship between Tasks and Employees

         Can be performed by
Task 1       Employee 1
Task 2       Employee 2
Task 3       Employee 2
Task 4       Both
Task 5       Employee 2

Employee 2 cannot perform task 1 since her acquired level in ability 3 (3) is less than the level task 1 requires in ability 3 (4). A similar analysis is performed for the rest of the tasks, and the findings in Table 4.3 are used to construct the y matrix for this small problem. For the construction of the t matrix, we need to find the maximum differences between acquired and required ability levels. Consider task 4, which can be performed by both employees. The t value of task 4 for employee 1 is ⌈10/max{2 − 1, 2 − 2, 5 − 2, 0}⌉ = ⌈10/3⌉ = 4, and for employee 2 it is ⌈10/max{1 − 1, 4 − 2, 3 − 2, 0}⌉ = 5, which means employees 1 and 2 will perform task 4 in 4 and 5 time units, respectively.
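The construction of y and t from Tables 4.1 and 4.2 can be sketched as follows (Python; the guard for a zero maximum difference is our assumption, since the formula is undefined in that case):

```python
import math

# Ability levels acquired by employees (Table 4.1) and required by tasks (Table 4.2)
a = [[2, 2, 5], [1, 4, 3]]                                    # employees x abilities
r = [[2, 1, 4], [1, 3, 2], [1, 3, 3], [1, 2, 2], [1, 4, 3]]   # tasks x abilities

def capable(a_k, r_i):
    # Employee k can perform task i iff a_kj >= r_ij for every ability j
    return all(ak >= ri for ak, ri in zip(a_k, r_i))

def duration(a_k, r_i):
    # t_ik = ceil(10 / max_j (a_kj - r_ij)); a zero maximum difference is
    # treated as 1 here (an assumption: the source formula is undefined then)
    gap = max(ak - ri for ak, ri in zip(a_k, r_i))
    return math.ceil(10 / max(gap, 1))

y = [[1 if capable(a_k, r_i) else 0 for a_k in a] for r_i in r]
t = [[duration(a_k, r_i) if capable(a_k, r_i) else None for a_k in a] for r_i in r]

print(y)      # [[1, 0], [0, 1], [0, 1], [1, 1], [0, 1]]: matches Table 4.3
print(t[3])   # [4, 5]: task 4 takes employees 1 and 2 four and five time units
```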


The t values used to solve the case study we constructed are shown in Table 4.4:

Table 4.4: Time Units Required by Employees to Perform Tasks

                          EMPLOYEES
TASKS    1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
   1     2  2  2  2  2  2  4  4  4  4  4  5  5  5  5
   2     2  2  2  2  2  2  4  4  4  4  4  3  3  4  4
   3     2  2  2  2  3  3  4  4  4  4  4  5  5  5  5
   4     2  2  2  2  2  2  4  4  4  4  4  5  5  5  5
   5     2  2  2  2  2  2  4  4  4  4  4  5  5  5  5
   6     2  2  2  2  2  2  4  4  5  5  5  3  3  4  4
   7     2  2  2  2  2  2  4  4  5  5  5  3  3  4  4
   8     2  2  2  2  2  2  4  4  5  5  5  3  3  4  4
   9     2  2  2  2  2  2  4  4  5  5  5  3  3  4  4
  10     2  2  2  2  3  3  4  4  4  4  4  3  3  4  4
  11     2  2  2  3  4  4  4  4  4  4  4  5  5  5  5
  12     2  2  2  3  4  4  4  4  4  4  4  5  5  5  5
  13     2  2  2  2  3  3  4  4  4  4  4  3  3  4  4
  14     2  2  2  2  3  3  4  4  4  4  4  3  3  4  4
  15     2  2  2  2  2  2  4  4  4  4  4  4  4  5  5
  16     2  2  2  2  3  3  3  3  3  3  3  4  4  4  4
  17     2  2  2  2  2  2  3  3  3  3  3  4  4  4  4
  18     2  2  2  2  2  2  4  4  4  4  4  3  3  4  4
  19     2  2  2  2  2  2  4  4  4  4  4  3  3  4  4
  20     2  2  2  2  2  2  4  4  4  4  4  5  5  5  5
  21     2  2  2  2  2  2  4  4  4  4  4  5  5  5  5
  22     2  2  2  2  2  2  4  4  4  4  4  5  5  5  5

The other parameters of the model were directly taken from the article.

The mathematical formulation of the workload allocation problem, adopting the continuous methodology, is as follows:


max Σ_{s∈M} w_s p_s
s.t.
Σ_{k∈M} x_ik = f_i ∀i ∈ N (4.1)
z_k = Σ_{i∈N} t_ik x_ik ∀k ∈ M (4.2)
z̄_k = M1 − z_k ∀k ∈ M (4.3)
r_s − d_sk ≤ z̄_k ∀k, s ∈ M (4.4)
p_s = (s·r_s − Σ_{k∈M} d_sk) − ((s−1)·r_{s−1} − Σ_{k∈M} d_{(s−1)k}) ∀s ∈ M (4.5)
x_ik ≤ M2·y_ik ∀i ∈ N, ∀k ∈ M (4.6)
d_sk, z_k, z̄_k, p_s ≥ 0 ∀k, s ∈ M (4.7)

Constraint 4.1 ensures that each task is performed at its required frequency. Constraint 4.2 calculates the total workload of each employee by multiplying the time it takes that employee to perform each assigned task by the number of times she needs to perform it. Constraints 4.3, 4.4 and 4.5 ensure that the ordered permutation of the workload allocation is obtained using continuous variables. Notice that in this case study we need an auxiliary variable z̄_k in order to do such an ordering. This is because the allocation made here is not an allocation of a good but of a bad, i.e., being assigned less workload is favorable. According to the structure of the OWA function that we use, the highest weight should be given to the worst-off entity, and the allocation vector should be ordered from the worst-off entity to the best-off entity. In the case of workload allocation, this requires the allocation vector z to be ordered in a decreasing fashion. The continuous methodology provided in Chapter 3 under Section Mathematical Formulations results in an increasing order of the allocation vector z. Therefore, the auxiliary variable z̄_k is needed: subtracting each z_k from a sufficiently big number and ordering the resulting z̄_k in an increasing fashion corresponds to ordering the vector z in a decreasing fashion. Constraint 4.6 ensures that each task is assigned to a worker that is capable of performing it. The set of constraints 4.7 are non-negativity constraints.

We solve a problem instance with 15 employees and 22 tasks to be performed in the system. There are 14 different abilities defined. We set M1 = Σ_{i∈N} Σ_{k∈M} f_i t_ik and M2 = Σ_{i∈N} f_i. The model is coded in GAMS and solved quite fast by using CPLEX 12.6.3 on a dual core (Intel Core i7 2.40GHz) computer with 8 GB RAM for different weight combinations. The central processing unit (CPU) times are always under 2 seconds. Recall that we generate the weight vector of the OWA function based on Σ_{j=1}^k w_j = (k/m)^l. The power l is taken between 0 and 1 with increments of 0.1. Figure 4.2 shows the total workload of all employees versus the l value:

[Figure: total workload (vertical axis, roughly 150 to 250) plotted against l (horizontal axis, 0 to 1).]

Figure 4.2: Total Workload versus l

In Figure 4.2, the trade-off between efficiency and fairness can be observed. Efficiency here corresponds to the total workload: when the total workload gets smaller, efficiency increases. In the case where l = 1, the utilitarian case is considered; hence the lowest total workload value, i.e., the most efficient solution, is obtained. As the l value decreases, the total workload increases, so the allocation becomes less efficient but, at the same time, more fair. This demonstrates the trade-off between efficiency and fairness. When l = 0, the Rawlsian function is used; hence the resulting allocation is the least efficient one.

To illustrate how fairness changes as the OWA function changes (which is controlled by the parameter l), one can draw the Lorenz curves for different l values. In income economics, the Lorenz curve is a cumulative population versus cumulative income curve [17]. The x-axis of the Lorenz curve represents the worst-off x percent of the population and the y-axis shows what percent of the total outcome is received by that population percentage. The 45-degree line is interpreted as the most fair allocation, as it gives the same amount to every entity in the system. The Lorenz curves for some example values of l for the workload allocation problem are as follows:


[Figure: % cumulative workload plotted against population % (ordered with respect to workload), with Lorenz curves for l = 0, l = 0.6 and l = 1.]

Figure 4.3: Lorenz Curves for the Workload Allocation

It is seen in Figure 4.3 that when l = 1, the corresponding Lorenz curve is farthest away from the 45-degree line. As l decreases (as the corresponding OWA function becomes more inequity-averse), the curves get closer to the 45-degree line, indicating that the allocation becomes more fair.
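The points of a Lorenz curve are straightforward to compute from an allocation vector; a minimal Python sketch (the function name is ours):

```python
def lorenz_points(z):
    # Sort entities from worst-off to best-off, then cumulate; returns
    # (population %, cumulative outcome %) pairs of the Lorenz curve.
    p = sorted(z)
    total = float(sum(p))
    points, cum = [(0.0, 0.0)], 0.0
    for k, v in enumerate(p, start=1):
        cum += v
        points.append((100.0 * k / len(p), 100.0 * cum / total))
    return points

# A perfectly equal allocation lies exactly on the 45-degree line:
print(lorenz_points([20, 20, 20, 20]))
# An unequal allocation bows away from the 45-degree line:
print(lorenz_points([5, 10, 25, 60]))   # passes through (50%, 15%)
```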

4.2 Knapsack Problem

The second case study that we will discuss is a knapsack setting, in which a decision maker is trying to choose which projects to invest in subject to a limited budget.

In the classical knapsack problem, the aim is to maximize benefit within the available budget. Let c be the cost vector, b be the benefit vector and BU be the available budget. N is the set of projects and x is a binary decision variable vector indicating whether an item is included in the knapsack or not. The generic formulation of the classical knapsack problem is as follows:


max Σ_{i∈N} b_i x_i
s.t. Σ_{i∈N} c_i x_i ≤ BU
x_i ∈ {0, 1} ∀i ∈ N
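For modest integer budgets, the classical knapsack problem above can be solved exactly by standard dynamic programming; a Python sketch on a small hypothetical instance:

```python
def knapsack(costs, benefits, budget):
    # 0-1 knapsack: best[c] is the maximum benefit achievable with capacity c.
    # Iterating the capacity downwards ensures each item is used at most once.
    best = [0] * (budget + 1)
    for c, b in zip(costs, benefits):
        for cap in range(budget, c - 1, -1):
            best[cap] = max(best[cap], best[cap - c] + b)
    return best[budget]

# Hypothetical instance: the optimum picks the items with costs 8 and 6.
print(knapsack([4, 6, 8, 6], [7, 5, 8, 10], 14))   # 18
```

Note that this maximizes total benefit only; it ignores the fairness dimension that the categorized setting below introduces.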

Different from the classical knapsack setting, we assume that the projects belong to different categories and the aim is to be fair among those categories. Let us construct a small example:

Table 4.5: Small Example for Categorized Knapsack

             Benefit   Cost   Category
Project 1       7        4       1
Project 2       5        6       2
Project 3       8        8       3
Project 4      10        6       1
Project 5       7        9       2
Project 6       5        2       2
Project 7       2        4       1
Project 8       9        5       3
Project 9       3        8       2
Project 10      9        9       1

From Table 4.5, it can be seen that there are 10 projects and 3 categories to which those projects belong. Assume that the total available budget is 30. If we decide to invest in the first 4 projects, the benefit allocation vector we get with respect to the categories is (17,5,8). But if we invest in projects 5, 6, 8 and 10, we get the benefit allocation (9,12,9). Even though both allocations have the same total benefit, the latter distributes the benefits in a more fair manner than the former.
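The two candidate portfolios above can be checked with a few lines of Python (project data stored as benefit, cost, category, consistent with the benefit vectors computed in the text; the function name is ours):

```python
# (benefit, cost, category) for each project of Table 4.5
projects = {1: (7, 4, 1), 2: (5, 6, 2), 3: (8, 8, 3), 4: (10, 6, 1),
            5: (7, 9, 2), 6: (5, 2, 2), 7: (2, 4, 1), 8: (9, 5, 3),
            9: (3, 8, 2), 10: (9, 9, 1)}

def category_benefits(selected, m=3):
    # Sum the benefits of the selected projects per category.
    z = [0] * m
    for i in selected:
        b, _, cat = projects[i]
        z[cat - 1] += b
    return z

print(category_benefits([1, 2, 3, 4]))    # [17, 5, 8]
print(category_benefits([5, 6, 8, 10]))   # [9, 12, 9]: same total, more equitable
```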

Thus, the decision to be made by the decision maker is which projects to fund within a limited budget so that both efficiency (maximizing total benefit) and fairness (allocating the benefit across categories in an equitable manner) are considered. For this case study, the binary methodology is adopted, as the size of the problem is not extremely large and the resulting models can be solved in reasonable time. The binary formulation of the knapsack problem is as follows:


Sets:
N: set of projects = {1, 2, ..., n}
M: set of categories = {1, 2, ..., m}

Parameters:
g_ik = 1 if project i belongs to category k, 0 otherwise
c_i: cost of initiating project i, ∀i ∈ N
b_i: the benefit that will be received from project i, ∀i ∈ N
w_s: weight of the sth worst-off category, ∀s ∈ M
BU: budget
M1: sufficiently big number

Decision variables:
x_i = 1 if project i is funded, 0 otherwise
d_ks = 1 if the kth element in z is the sth element in p, 0 otherwise
z_k: total benefit of category k, ∀k ∈ M
p_s: sth element in the ordered permutation of the benefit allocation, ∀s ∈ M

max Σ_{s∈M} w_s p_s
s.t.
Σ_{i∈N} c_i x_i ≤ BU (4.8)
z_k = Σ_{i∈N} b_i g_ik x_i ∀k ∈ M (4.9)
p_s − z_k ≤ M1(1 − d_ks) ∀k, s ∈ M (4.10)
p_s − z_k ≥ −M1(1 − d_ks) ∀k, s ∈ M (4.11)
Σ_{s∈M} d_ks = 1 ∀k ∈ M (4.12)
Σ_{k∈M} d_ks = 1 ∀s ∈ M (4.13)
p_{s−1} ≤ p_s ∀s ∈ {2, ..., m} (4.14)
d_ks, x_i ∈ {0, 1} ∀i ∈ N, ∀s, k ∈ M (4.15)

Constraint 4.8 ensures that the total cost of the allocation does not exceed the available budget. Constraint 4.9 assigns the benefits of the chosen projects to their corresponding categories; hence, the total benefit allocation of the categories, resulting from the decisions of which projects to implement, is determined. Constraints 4.10, 4.11, 4.12, 4.13 and 4.14 ensure that the ordered permutation of the benefit allocation is obtained by using binary variables. Constraint 4.15 is the domain constraint ensuring that the variables x and d can only take values 0 and 1. M1 could be set as M1 = Σ_{i∈N} b_i.

We will first discuss a small real-life example and then provide results on the 60 randomized instances that we used to assess our model.

4.2.1 Real-life example

There are 39 projects, each belonging to 1 of 3 different categories. Please refer to [1] for the parameters of the model. The problem is again coded in GAMS and solved by using CPLEX 12.6.3 on a dual core (Intel Core i7 2.40GHz) computer with 8 GB RAM for different weight combinations. The CPU times are always around 2 seconds. When generating the weight vector, the power l is taken between 0 and 1 with increments of 0.1. Figure 4.4 shows the total benefit of all initiated projects versus the l value:


[Figure: total benefit (vertical axis, roughly 40 to 65) plotted against l (horizontal axis, 0 to 1).]

Figure 4.4: Total Benefit versus l for the Real-life Knapsack Problem

In Figure 4.4, the trade-off between efficiency and fairness can be observed. For this problem, efficiency corresponds to the total benefit, i.e., the more benefit we get the more efficient we become. In the case where l=1, the utilitarian case is considered, hence, the highest total benefit value is obtained. As the l value decreases, the value of the total benefit decreases, as expected.

To illustrate the fairness concerns more clearly, let us again draw the Lorenz Curves. The 45-degree line is again interpreted as the most fair allocation as it gives the same amount to every entity in the system. Figure 4.5 shows the Lorenz Curves for some values of l for the real-life knapsack problem.


[Figure: % cumulative benefit plotted against population % (ordered with respect to benefit), with Lorenz curves for l = 0, l = 0.7 and l = 1.]

Figure 4.5: Lorenz Curves for the Real-life Knapsack Problem

Notice the difference in the shapes of the Lorenz curves between Figures 4.3 and 4.5. In Figure 4.3, the Lorenz curves are concave. This is because in the workload allocation case we are allocating a bad instead of a good; hence, less is better. In the knapsack problem, however, the Lorenz curves are convex: since we are allocating a good, more benefit is better.

4.2.2 Randomly Generated Knapsack Problems

In order to perform a more extensive analysis on larger problem instances, we solve 60 randomly generated knapsack instances. In the data that we use, benefit and cost values are randomly generated integers between 10 and 100, and the budget is assumed to be 0.5 · Σ_{i∈N} c_i, i.e., half of the total cost. The number of projects n ranges between 50 and 150 in increments of 50, and the number of categories m is either 3 or 5. For each pair of n and m, 10 different randomly generated instances are created.

The problem is coded in Visual C++ and solved by using CPLEX 12.6.3 on a dual core (Intel Core i7 2.40GHz) computer with 8 GB RAM for different weight combinations. The l parameter used for generating weight combinations again ranges between 0 and 1 in increments of 0.1.
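The instance generation described above can be sketched as follows. This is a minimal illustrative helper, not the thesis code: the function name and the assignment of projects to categories (uniformly at random) are assumptions, as the text does not specify how categories are assigned.

```python
import random

def generate_instance(n, m, seed=0):
    """Generate a random knapsack instance following the experimental setup:
    integer benefits and costs in [10, 100], budget equal to half the total
    cost, and each project assigned to one of m categories (the uniform
    category assignment is an assumption)."""
    rng = random.Random(seed)
    benefit = [rng.randint(10, 100) for _ in range(n)]
    cost = [rng.randint(10, 100) for _ in range(n)]
    budget = 0.5 * sum(cost)
    category = [rng.randrange(m) for _ in range(n)]
    return benefit, cost, budget, category

# Example: one of the 10 instances for the (n, m) = (50, 3) configuration.
benefit, cost, budget, category = generate_instance(50, 3, seed=1)
```

Varying the seed produces the 10 replications used for each (n, m) pair.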


In Table 4.6 the results of these experiments are summarized. For each n and m combination, the worst and the average values for the required CPU times and for the number of Pareto solutions found are reported. Note that the worst result for the solution times is the maximum value, whereas the worst result for the number of Pareto solutions found is the minimum value.

Table 4.6: Results for the Randomized Knapsack Instances

              CPU Times (seconds)    Number of Pareto Solutions Found
  n    m      Average    Maximum         Average    Minimum
  50   3        1.952      2.875             4.8          1
  50   5        4.961     17.172             5.9          5
 100   3        4.258      8.516             6.1          4
 100   5        3.656      7.010             7.6          5
 150   3        8.352     39.031             7.9          5
 150   5        6.804     15.001             8.3          7

As can be seen in Table 4.6, the average and maximum CPU times usually tend to increase as the number of projects increases. The highest CPU time required is around 39 seconds, which is quite fast for a combinatorial problem. It is also observed that the number of Pareto solutions increases as both the number of projects and the number of categories increase. The minimum number of Pareto solutions found is 1; notice that this means, for that instance, a single solution is optimal for all the feasible weight combinations. Referring to Figure 3.1, the two extremes are the utilitarian approach and the Rawlsian approach. In the analysis we varied l between 0 (which corresponds to the Rawlsian approach) and 1 (which corresponds to the utilitarian approach).

The Lorenz Curves and the graphs showing how the total benefit values change for different weight combinations for two example instances, one with n=150 projects and m=5 categories, the other with n=50 projects and m=3 categories, can be found in Appendix A.


Chapter 5

Ordered Weighted Averaging (OWA) Dominance Based Approach

In this chapter, the second approach to the fair resource allocation problem will be discussed. The problem is again finding a solution that is efficient and fair. However, this time our overall aim is to maximize efficiency, and fairness concerns will be handled in the constraints.

This approach is similar to inequality index based approaches in the sense that it incorporates constraints in order to ensure fairness in the allocation. In addition, different from the method discussed in Chapters 3 and 4, this method assumes that the decision maker (DM) will provide a reference point, so that the solution found should be better than this reference point in terms of fairness.

In this chapter, firstly the general structure of the problem will be introduced. Then, two interactive algorithms that aim to solve the problem will be presented. Finally, a hybrid algorithm, which is a combination of the two, will be given.

5.1 General Structure of the OWA Dominance Based Approach with a Reference Point

The overall aim of the problem is to maximize efficiency while having a fair allocation. We also have a slight change in the problem setting: we assume that the decision maker (DM) has provided a reference allocation and she wants the proposed solution to be better than this reference allocation in terms of fairness. Hence, she is trying to find the most efficient solution among the ones that are better than the reference allocation in terms of fairness.


We have defined OWA aggregations in the previous chapters and showed that it is one of the many ways that can be used in order to ensure fairness of the allocations. In this approach, rather than optimizing an OWA function, an efficiency function will be maximized and OWA aggregations will be incorporated into the model via the constraints in order to ensure a fair allocation.

We assume that the DM has provided a reference allocation vector, which is denoted as a. The decision maker wants the proposed allocation z to be better than a, i.e., a ≼_DM z. In order to mathematically represent the preference relation ≼_DM of the DM, we will again use the OWA aggregations as follows: OWA_DM(a) ≤ OWA_DM(z) ⟺ a ≼_DM z, where OWA_DM is the underlying OWA aggregation function of the DM, which is unknown.
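For any fixed weight vector, the comparison OWA_DM(a) ≤ OWA_DM(z) is straightforward to evaluate. The sketch below is illustrative only: the weight vector w_dm and the sample allocations are hypothetical (in our setting the DM's actual weights are unknown), and it follows the convention of the earlier chapters, where nonincreasing weights are applied to the allocation sorted in increasing order, so that the worst-off entity receives the largest weight.

```python
def owa(weights, allocation):
    """OWA aggregation: apply the (nonincreasing) weights to the allocation
    sorted in increasing order, so the largest weight multiplies the
    worst-off entity's share."""
    return sum(w * z for w, z in zip(weights, sorted(allocation)))

# Hypothetical DM weights, nonincreasing to emphasize the worst-off entity:
w_dm = [0.5, 0.3, 0.2]
a = [10, 20, 30]   # reference allocation
z = [15, 20, 25]   # candidate allocation
# z is (weakly) preferred to a under these weights iff OWA(z) >= OWA(a):
print(owa(w_dm, z) >= owa(w_dm, a))  # prints True
```

Here z redistributes the same total more evenly than a, so it scores higher under any such fairness-emphasizing weight vector.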

We will call this approach the OWA dominance based approach (due to the use of ≼_DM) with a reference point (a). The general structure of the model used in this approach is as follows.

Maximize   f(x)
subject to x ∈ X
           z = G(x)
           OWA_DM(z) ≥ OWA_DM(a)

As before, x ∈ R^n is the decision variable vector, X is the generic feasible region and z ∈ R^m is the resulting allocation vector. In this model, the objective is maximizing f(·) : R^n → R, which is a function of efficiency, and the last constraint ensures that, for an allocation to be feasible, it has to be better than the reference allocation for the DM.

Similar to the models discussed in the previous chapters, this model uses an OWA aggregation; hence, the allocation vectors z and a must be ordered. Ordering of the reference vector a can be performed quite easily, since it is a parameter of the mathematical model. Therefore, from now on we assume that a is ordered. Ordering of the allocation vector z will be performed by using the Continuous Methodology, as it works faster than the Binary Methodology.

Note that we assume that the exact form of the OWA_DM function (i.e., the weights of the DM's OWA function) is unknown. Instead of parametrically assigning the weights, we will leave them as decision variables. Then, the mathematical formulation of the main model is as follows:
