
In this section, we introduce basic concepts in linear programming, specifically duality, based on Chapter 7 of [65]. Readers can explore [65] for an in-depth explanation of linear programming.

Consider the problem of minimizing the function $\sum_i c_i x_i$, also called a linear cost function, subject to a set of linear equality and inequality constraints. Then, the corresponding linear program is given by

$$
\begin{aligned}
\underset{x}{\text{minimize}} \quad & c^{\top} x \\
\text{subject to} \quad & a_i^{\top} x \ge b_i, \quad i \in F_1, \\
& a_i^{\top} x \le b_i, \quad i \in F_2, \\
& a_i^{\top} x = b_i, \quad i \in F_3, \\
& x_j \ge 0, \quad j \in B_1, \\
& x_j \le 0, \quad j \in B_2.
\end{aligned}
\qquad (2.1.1)
$$
In the above Eq. (2.1.1), $F_1$, $F_2$, and $F_3$ are finite index sets used to construct the constraints from the $n$-dimensional vectors $a_i$ and scalars $b_i$. Similarly, the sets $B_1$ and $B_2$ constrain the variables $x_j$ to be either non-negative or non-positive. The variables $x_1, \ldots, x_n$ are popularly known as decision variables. A vector $x$ satisfying all the constraints of Eq. (2.1.1) is called a feasible solution or feasible vector.

There can be multiple feasible vectors for a linear program. Let $x^\star$ be a feasible vector that minimizes the objective function, i.e., $c^{\top} x^\star \le c^{\top} x$ for all feasible $x$. Then, $x^\star$ is called an optimal feasible solution or optimal solution.

Suppose there are $m$ constraints indexed by $i$. Let $b = (b_1, \ldots, b_m)$ and let $A$ be an $m \times n$ matrix whose $i$-th row is $a_i^{\top}$. We can then represent the above constraints as $A x = b$, and Eq. (2.1.1) can be rewritten as

$$
\begin{aligned}
\underset{x}{\text{minimize}} \quad & c^{\top} x \\
\text{subject to} \quad & A x = b, \\
& x \ge 0.
\end{aligned}
\qquad (2.1.2)
$$

A linear program of the form of Eq. (2.1.2) is said to be in standard form. The procedure to convert the generic form of Eq. (2.1.1) to the standard form can be found in [65].
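To make the standard form concrete, the following sketch enumerates all basic solutions of a small hypothetical standard-form LP (the cost vector, constraint data, and tolerances are illustrative assumptions). It exploits the fact that an optimal solution, if one exists, occurs at a basic feasible solution; this naive enumeration merely stands in for a real solver such as the simplex method:

```python
from itertools import combinations

# A tiny standard-form LP (Eq. (2.1.2)): minimize c.x  s.t.  A x = b, x >= 0.
# Hypothetical data: the slack variables x3, x4 have zero cost.
c = [-3.0, -2.0, 0.0, 0.0]
A = [[1.0, 1.0, 1.0, 0.0],          # x1 + x2 + x3 = 4
     [1.0, 0.0, 0.0, 1.0]]          # x1      + x4 = 2
b = [4.0, 2.0]

def solve2(M, rhs):
    """Solve a 2x2 linear system by Cramer's rule; None if singular."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    if abs(det) < 1e-12:
        return None
    return [(rhs[0]*M[1][1] - M[0][1]*rhs[1]) / det,
            (M[0][0]*rhs[1] - rhs[0]*M[1][0]) / det]

best, best_x = float("inf"), None
n, m = len(c), len(A)
for basis in combinations(range(n), m):          # every candidate basis
    M = [[A[i][j] for j in basis] for i in range(m)]
    sol = solve2(M, b)
    if sol is None or any(v < -1e-9 for v in sol):
        continue                                 # singular or infeasible basis
    x = [0.0] * n
    for j, v in zip(basis, sol):
        x[j] = v
    cost = sum(ci * xi for ci, xi in zip(c, x))
    if cost < best:
        best, best_x = cost, x

print(best, best_x)   # -10.0 [2.0, 2.0, 0.0, 0.0]
```

For this data the best basic feasible solution is $x = (2, 2, 0, 0)$ with cost $-10$.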

We can solve linear programs in multiple ways; however, duality theory is widely used.

Duality theory is based on the Lagrange multiplier method used in calculus [66] to minimize a function with equality constraints.

Consider an example problem as given in [65]:

$$
\begin{aligned}
\underset{x,y}{\text{minimize}} \quad & x^2 + y^2 \\
\text{subject to} \quad & x + y = 1.
\end{aligned}
\qquad (2.1.3)
$$

In the Lagrange multiplier method, Eq. (2.1.3) is first transformed into an unconstrained minimization problem by introducing a Lagrange multiplier for each constraint. Let $p$ be the Lagrange multiplier for the constraint $x + y = 1$. The Lagrangean $L(x, y, p)$ of Eq. (2.1.3) is given by

$$
L(x, y, p) = x^2 + y^2 + p(1 - x - y). \qquad (2.1.4)
$$

The Lagrangean $L$, also called the Lagrange dual, is minimized over $x$ and $y$ while keeping $p$ fixed, i.e., $\partial L / \partial x = 0$ and $\partial L / \partial y = 0$. The Lagrangean $L$ achieves optimality when $x = y = p/2$. The constraint of Eq. (2.1.3) then implies that $p = 1$. Hence, the optimal solution of Eq. (2.1.3) is $x = y = 1/2$.
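The stationarity computation for this example can be checked numerically (a minimal sketch; the perturbation sizes are arbitrary):

```python
# Stationarity of L(x, y, p) = x^2 + y^2 + p*(1 - x - y):
#   dL/dx = 2x - p = 0  ->  x = p/2
#   dL/dy = 2y - p = 0  ->  y = p/2
# The constraint x + y = 1 then forces p = 1, hence x = y = 1/2.

def lagrangean(x, y, p):
    return x**2 + y**2 + p * (1 - x - y)

p = 1.0
x = y = p / 2
assert abs(x + y - 1) < 1e-12        # constraint satisfied

# At the optimum, L equals the constrained optimal cost x^2 + y^2 = 1/2,
# and perturbing x away from 1/2 (with p held fixed) only increases L.
opt = lagrangean(x, y, p)
for dx in (-0.1, 0.1):
    assert lagrangean(x + dx, y, p) > opt

print(x, y, p, opt)   # 0.5 0.5 1.0 0.5
```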

The Lagrange multiplier method can be summarized as follows: we associate with each constraint a Lagrange multiplier, popularly called a price, and allow constraint violation by converting to an unconstrained optimization problem. We then find prices such that the optimal solutions of the constrained and unconstrained optimization problems are equal. In other words, we find prices such that the constraints do not affect the optimal cost.

Linear programming uses the Lagrange multiplier method and hence associates a price with each constraint. The goal is to find prices such that the constraints do not affect the optimal cost, which eventually leads to a new linear program called the dual; the original linear program is called the primal.

The dual of the standard-form linear program (Eq. (2.1.2)) is given by

$$
\begin{aligned}
\underset{p}{\text{maximize}} \quad & p^{\top} b \\
\text{subject to} \quad & p^{\top} A \le c^{\top}.
\end{aligned}
\qquad (2.1.5)
$$

The dual of a minimization problem is a maximization problem and vice versa. The procedure for finding the dual can be found in [65].

The weak and strong duality theorems describe the relationship between the objective functions of the primal and the dual. The proofs can be found in [65].

Theorem 2.1 (Weak duality) If $x$ and $p$ are feasible solutions of the primal and dual, respectively, in standard form, then $p^{\top} b \le c^{\top} x$.

Theorem 2.2 (Strong duality) If a primal has an optimal solution and the corresponding dual also has an optimal solution, then the respective optimal costs are equal.
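Both theorems can be verified numerically on a small example. The sketch below uses a hypothetical standard-form LP whose optimal primal and dual solutions are supplied by hand (all data and tolerances are illustrative assumptions, not taken from the text):

```python
# Primal (standard form): min c.x  s.t.  A x = b, x >= 0
# Dual:                   max p.b  s.t.  p^T A <= c^T
c = [-3.0, -2.0, 0.0, 0.0]
A = [[1.0, 1.0, 1.0, 0.0],
     [1.0, 0.0, 0.0, 1.0]]
b = [4.0, 2.0]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def primal_feasible(x):
    return (all(xi >= -1e-9 for xi in x)
            and all(abs(dot(row, x) - bi) < 1e-9 for row, bi in zip(A, b)))

def dual_feasible(p):
    # one inequality p.A_j <= c_j per column A_j of A
    return all(dot(p, col) <= cj + 1e-9 for col, cj in zip(zip(*A), c))

x_opt = [2.0, 2.0, 0.0, 0.0]     # optimal primal solution (by inspection)
p_opt = [-2.0, -1.0]             # optimal dual solution (by inspection)
assert primal_feasible(x_opt) and dual_feasible(p_opt)

# Weak duality (Theorem 2.1): p.b <= c.x for ANY feasible pair.
x_feas = [0.0, 4.0, 0.0, 2.0]    # another, suboptimal, feasible point
assert dot(p_opt, b) <= dot(c, x_feas) + 1e-9

# Strong duality (Theorem 2.2): the optimal costs coincide.
assert abs(dot(p_opt, b) - dot(c, x_opt)) < 1e-9

print(dot(c, x_opt), dot(p_opt, b))   # -10.0 -10.0
```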

Theorem 2.1 and Theorem 2.2 are known as the weak and strong duality theorems, respectively. The weak duality theorem implies that the costs of the primal and the dual bound each other. The strong duality theorem implies that the costs of the primal and the dual are equal at optimality. Furthermore, this leads to an important relationship between the primal and dual optimal solutions called complementary slackness, as defined in the following theorem:

Theorem 2.3 (Complementary slackness) Let $x$ and $p$ be feasible solutions of the primal and dual, respectively, in standard form. Then $x$ and $p$ are optimal solutions if and only if:

$$
p_i (a_i^{\top} x - b_i) = 0, \ \forall i \qquad \text{and} \qquad (c_j - p^{\top} A_j) x_j = 0, \ \forall j.
$$

Theorem 2.3 is the cornerstone for designing primal-dual approximation algorithms [67].
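The two conditions of Theorem 2.3 can be checked mechanically once optimal primal and dual solutions are at hand. A minimal sketch for a small hypothetical LP (all data are illustrative assumptions):

```python
# Complementary slackness check for a small hypothetical standard-form LP.
c = [-3.0, -2.0, 0.0, 0.0]
A = [[1.0, 1.0, 1.0, 0.0],
     [1.0, 0.0, 0.0, 1.0]]
b = [4.0, 2.0]
x = [2.0, 2.0, 0.0, 0.0]   # optimal primal solution (by inspection)
p = [-2.0, -1.0]           # optimal dual solution (by inspection)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# p_i (a_i . x - b_i) = 0 for all i
# (automatic in standard form, since A x = b holds with equality).
for pi, row, bi in zip(p, A, b):
    assert abs(pi * (dot(row, x) - bi)) < 1e-9

# (c_j - p . A_j) x_j = 0 for all j: either the reduced cost of
# column j vanishes, or the corresponding variable x_j is zero.
reduced = [cj - dot(p, col) for cj, col in zip(c, zip(*A))]
for rj, xj in zip(reduced, x):
    assert abs(rj * xj) < 1e-9

print(reduced)   # [0.0, 0.0, 2.0, 1.0]
```

Here the nonzero reduced costs belong exactly to the variables that are zero at the optimum, as the theorem requires.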

2.1.2 Karush-Kuhn-Tucker Conditions

In the previous section, the Lagrange multiplier method was applied to a linear program with equality constraints. The Karush-Kuhn-Tucker (KKT) conditions extend the Lagrange multiplier method to a generic function with inequality constraints. We briefly introduce the KKT conditions required for comprehending Section 2.2. Readers can refer to [68] for an in-depth discussion of both convex programming duality and the KKT conditions.

Consider a generic maximization problem

$$
\begin{aligned}
\underset{x}{\text{maximize}} \quad & f(x) \\
\text{subject to} \quad & h_i(x) = 0, \quad \forall i, \\
& g_j(x) \le 0, \quad \forall j.
\end{aligned}
\qquad (2.1.6)
$$

Let $\alpha_i$ be the Lagrangean multiplier associated with the equality constraint $h_i(x)$ in Eq. (2.1.6). Similarly, let $\mu_j$ be the Lagrangean multiplier associated with the inequality constraint $g_j(x)$. The Lagrangean dual $L$ is given by

$$
L(x, \alpha, \mu) = f(x) - \sum_i \alpha_i h_i(x) - \sum_j \mu_j g_j(x).
$$

Let $\nabla_x$ be the gradient operator of a function, i.e., $\nabla_x f(x)$ represents the gradient of the function $f(x)$ with respect to $x$. Let $x^\star$ be the optimal solution of Eq. (2.1.6); then the KKT conditions are:

$$
\begin{aligned}
& \nabla_x L(x^\star, \alpha, \mu) = 0, \\
& h_i(x^\star) = 0, \ \forall i, \qquad g_j(x^\star) \le 0, \ \forall j, \\
& \mu_j \ge 0, \ \forall j, \\
& \mu_j \, g_j(x^\star) = 0, \ \forall j.
\end{aligned}
$$
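The KKT conditions can be verified on a small hypothetical instance with one equality and one inequality constraint (the objective, constraints, and candidate multipliers below are illustrative assumptions, chosen so the gradients are simple closed forms):

```python
# Hypothetical instance of the generic maximization problem:
#   maximize f(x) = -(x1^2 + x2^2)
#   s.t.     h(x) = x1 + x2 - 1 = 0   (equality,   multiplier alpha)
#            g(x) = -x1        <= 0   (inequality, multiplier mu)
# Candidate optimum and multipliers (found by hand):
x = (0.5, 0.5)
alpha, mu = -1.0, 0.0

grad_f = (-2 * x[0], -2 * x[1])
grad_h = (1.0, 1.0)
grad_g = (-1.0, 0.0)

# Stationarity: grad f - alpha*grad h - mu*grad g = 0 componentwise.
for gf, gh, gg in zip(grad_f, grad_h, grad_g):
    assert abs(gf - alpha * gh - mu * gg) < 1e-9
# Primal feasibility: h(x) = 0 and g(x) <= 0.
assert abs(x[0] + x[1] - 1) < 1e-9 and -x[0] <= 1e-9
# Dual feasibility and complementary slackness: mu >= 0, mu * g(x) = 0.
assert mu >= 0 and abs(mu * (-x[0])) < 1e-9

print("KKT conditions hold at", x)
```

Since the inequality constraint is inactive at this point, its multiplier is zero, which is exactly what complementary slackness demands.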

The unique characteristic of microeconomic theory is the ability to model economic activity as an interaction of economic agents with private information [21]. Individual decision making is not only a significant concept but also the basis for most of the analysis in microeconomic theory. In individual decision making, the goal is to choose outcomes. The decision-making model may be either preference based or choice based, depending on the relationship among the outcomes. In preference-based models, a preference relation exists among the outcomes based on the rationality axioms; i.e., the decision maker prefers some outcome over another. In choice-based models, the decision maker makes a choice based on the consistency