HW 3¶
ISE-530 Homework III: Chapters V and VI of Cottle-Thapa. Due 11:59 PM Wednesday September 30, 2020
Exercises 5.4, 5.6, 5.8, and 5.9.
Exercises 6.2, 6.5, 6.8, 6.13, and 6.16.
%load_ext nb_black
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.dpi"] = 300
import numpy as np
import cvxpy as cp
from collections import namedtuple
Chapter 5¶
5.4¶
Find the dual of the following linear program
Find the dual of the dual and thereby show that the dual of the dual is the primal.
Dual
Dual of Dual
\(\therefore\) Dual of the Dual is the primal.
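As a small numerical illustration of this result (the data \(A_0, b_0, c_0\) below are hypothetical, not the exercise's LP): mechanically dualizing the dual \(\max\, b^\top y\) s.t. \(A^\top y \leq c, y \geq 0\) (rewritten as a minimization) reproduces the primal \(\min\, c^\top x\) s.t. \(Ax \geq b, x \geq 0\), and by strong duality both problems attain the same optimal value:
A0 = np.array([[1.0, 2.0], [3.0, 1.0]])  # hypothetical data, for illustration only
b0 = np.array([4.0, 6.0])
c0 = np.array([3.0, 2.0])
# Primal: min c0'x s.t. A0 x >= b0, x >= 0.
x0 = cp.Variable(2)
primal = cp.Problem(cp.Minimize(c0 @ x0), [A0 @ x0 >= b0, x0 >= 0])
primal.solve()
# Dual: max b0'y s.t. A0' y <= c0, y >= 0; dualizing it again gives the primal back.
y0 = cp.Variable(2)
dual = cp.Problem(cp.Maximize(b0 @ y0), [A0.T @ y0 <= c0, y0 >= 0])
dual.solve()
print(np.round(primal.value, 2), np.round(dual.value, 2))  # equal optimal values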
5.6¶
Consider the linear program
in which
Use the optimality criteria (see page 127) to decide whether or not \(\bar{x} = (2,0,5)\) is an optimal solution of the LP. [Hint: Assume that \(\bar{x}\) is an optimal solution. Use the necessary conditions of optimality to determine what the vector \(\bar{y}\) would have to be. Check to make sure that all the conditions of optimality are met.]
Dual
A = np.array([[3, -1, 2], [4, 5, 0], [-6, 9, 5], [1, 1, 1]])
b = np.array([15, 8, 13, 4])
c = np.array([2, 3, 5])
x_bar = np.array([2, 0, 5])
# Create one vector optimization variable.
x = cp.Variable((3,), integer=False)
# Create constraints.
constraints = [
A @ x >= b,
x >= 0,
]
# Form objective.
obj = cp.Minimize(c.T @ x)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 27.59
The optimal solution is: x = [1.89, 0.09, 4.71]
Assuming \(\bar{x}\) is an optimal solution (as the hint suggests), we recover the corresponding dual solution \(\bar{y}\) using the Complementary Slackness Theorem:
A @ x_bar
array([16, 8, 13, 7])
b
array([15, 8, 13, 4])
\(\therefore \bar{x}\) is primal feasible.
Adding the primal slack variables \(s_j, j \in [1,2,3,4]\) into the optimal primal solution \(\bar{x}\):
By Complementary Slackness, we know that the optimal dual solution with dual slack variables \(t_k, k \in [1,2,3]\):
Plugging values into the dual constraints:
\(\therefore\) Dual Optimal Solution \(\bar{y}\):
y_bar = np.array([0, 2, 1, 0])
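As a cross-check, a small sketch (the variable names are ours) that recovers \(\bar{y}\) numerically from complementary slackness: constraints with slack get \(y_i = 0\), and the dual constraints for the positive \(\bar{x}_j\) are solved as equalities:
tight_rows = np.where(np.isclose(A @ x_bar, b))[0]  # binding primal constraints
pos_cols = np.where(x_bar > 0)[0]  # positive primal variables
y_check = np.zeros(A.shape[0])  # constraints with slack get y_i = 0
y_check[tight_rows] = np.linalg.solve(A[np.ix_(tight_rows, pos_cols)].T, c[pos_cols])
print(y_check)  # should reproduce y_bar = [0, 2, 1, 0]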
A vector \(\bar{x}\) is optimal iff \(\exists \bar{y}\) such that the following 4 optimality conditions hold:
Primal Feasibility \(A\bar{x} \geq b, \bar{x} \geq 0\)¶
print(
f"Primal Feasible!"
if np.alltrue(A @ x_bar >= b) & np.alltrue(x_bar >= 0)
else "Primal Infeasible!"
)
Primal Feasible!
Dual Feasibility \(\bar{y}^\top A \leq c^\top, \bar{y} \geq 0\)¶
print(
f"Dual Feasible!"
if np.alltrue(y_bar.T @ A <= c.T) & np.alltrue(y_bar >= 0)
else "Dual Infeasible!"
)
Dual Infeasible!
Complementary Slackness 1: \(\bar{y}^\top (A\bar{x} - b) = 0 \because\) if primal slack variables \(\not= 0\), dual variables \(= 0\), vice versa¶
print(
f"1st Complementary Slackness Passed!"
if y_bar.T @ (A @ x_bar - b) == 0
else f"1st Complementary Slackness Failed!"
)
1st Complementary Slackness Passed!
Complementary Slackness 2: \((\bar{y}^\top A - c^\top)\bar{x} = 0 \because\) if dual slack variables \(\not= 0\), primal variables \(= 0\), vice versa¶
print(
f"2nd Complementary Slackness Passed!"
if (y_bar.T @ A - c.T) @ x_bar == 0
else f"2nd Complementary Slackness Failed!"
)
2nd Complementary Slackness Passed!
Since the dual feasibility optimality condition failed, \(\bar{x}\) is not an optimal solution.
5.8¶
Consider the linear program
Dual
A = np.array(
[
[0, -1, -1, 1, 0, 0, 0],
[-1, -1, 0, 0, 1, 0, 0],
[-1, -1, -1, 0, 0, 1, 0],
[-1, 0, -1, 0, 0, 0, 1],
]
)
b = np.array([-1, -1, -1, -1])
c = np.array([37, 29, 33, 0, 0, 0, 0])
# Create one vector optimization variable.
x = cp.Variable((7,), integer=False)
# Create constraints.
constraints = [
A @ x == b,
x >= 0,
]
# Form objective.
obj = cp.Minimize(c.T @ x)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 49.5
The optimal solution is: x = [0.5, 0.5, 0.5, -0.0, 0.0, 0.5, -0.0]
\({c_B}^\top B^{-1} N - c_N\)
(c[[0, 1, 2, 5]].T @ np.linalg.inv(A[:, [0, 1, 2, 5]]) @ A[:, [3, 4, 6]]) - c[[3, 4, 6]]
array([-12.5, -16.5, -20.5])
Since the reduced cost coefficients are \(\leq 0\) for a minimization problem, the solution is optimal.
\(B^{-1}N\)
np.linalg.inv(A[:, [0, 1, 2, 5]]) @ A[:, [3, 4, 6]]
array([[ 0.5, -0.5, -0.5],
[-0.5, -0.5, 0.5],
[-0.5, 0.5, -0.5],
[-0.5, -0.5, -0.5]])
\(B^{-1}b\)
np.linalg.inv(A[:, [0, 1, 2, 5]]) @ b
array([0.5, 0.5, 0.5, 0.5])
\({c_B}^\top B^{-1}b\)
c[[0, 1, 2, 5]].T @ np.linalg.inv(A[:, [0, 1, 2, 5]]) @ b
49.5
(a) Use the Dual Simplex Algorithm to solve this LP.
To use the Dual Simplex method, the starting tableau must satisfy both:
- the optimality condition: all reduced cost coefficients \(\leq 0\) for a minimization problem, and
- the infeasibility condition: at least one RHS entry \(< 0\).
Simplex Tableau:
Since both conditions for Dual Simplex are satisfied, we can proceed.
After a few more pivots and ratio tests, the method terminates once the RHS is \(\geq 0\) [FEASIBLE] and the reduced cost coefficients are \(\leq 0\) [OPTIMAL].
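The pivots above are worked by hand; as a sketch, here is a minimal Dual Simplex implementation for the standard form \(\min c^\top x\) s.t. \(Ax = b, x \geq 0\) (the function name and tolerances are ours; it assumes the starting basis is dual feasible):
def dual_simplex(A, b, c, basis, max_iter=100):
    """Dual Simplex sketch: start dual feasible (reduced costs <= 0),
    pivot until the RHS is nonnegative (primal feasible), then stop."""
    n = A.shape[1]
    basis = list(basis)
    for _ in range(max_iter):
        B_inv = np.linalg.inv(A[:, basis])
        tableau = B_inv @ A
        b_bar = B_inv @ b
        if np.all(b_bar >= -1e-9):  # primal feasible => optimal
            x = np.zeros(n)
            x[basis] = b_bar
            return x, c @ x
        r = int(np.argmin(b_bar))  # leaving row: most negative RHS
        z_minus_c = c[basis] @ tableau - c  # stays <= 0 throughout
        cols = [j for j in range(n) if tableau[r, j] < -1e-9]
        if not cols:
            raise ValueError("Primal infeasible (dual unbounded).")
        # Entering column: ratio test that keeps the reduced costs <= 0.
        q = min(cols, key=lambda j: z_minus_c[j] / tableau[r, j])
        basis[r] = q
    raise RuntimeError("Iteration limit reached.")

x_ds, z_ds = dual_simplex(A, b, c, basis=[3, 4, 5, 6])  # start from the slack basis
print(np.round(x_ds, 2), z_ds)  # expect x = (0.5, 0.5, 0.5, 0, 0, 0.5, 0), z = 49.5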
(b) Write down the optimal primal solution and the optimal dual solution.
Optimal Primal Solution: \((x_1=0.5, x_2=0.5, x_3=0.5, s_1=0, s_2=0, s_3=0.5, s_4=0)\)
With \(t_k, k\in [1,2,3]\) denoting the dual slack variables, the optimal dual solution is: \((y_1=12.5, y_2=16.5, y_3=0, y_4=20.5, t_1=0, t_2=0, t_3=0)\)
(c) Verify that the complementary slackness conditions hold.
Complementary Slackness 1: \(\bar{y}^\top (A\bar{x} - b) = 0 \because\) if primal slack variables \(\not= 0\), dual variables \(= 0\), vice versa¶
x_bar = np.array([0.5, 0.5, 0.5, 0, 0, 0.5, 0])
y_bar = np.array([12.5, 16.5, 0, 20.5, 0, 0, 0])
print(
f"1st Complementary Slackness Passed!"
if y_bar[[0, 1, 2, 3]].T @ (-A[:, [0, 1, 2]] @ x_bar[[0, 1, 2]] - (-b)) == 0
else f"1st Complementary Slackness Failed!"
)
1st Complementary Slackness Passed!
Complementary Slackness 2: \((\bar{y}^\top A - c^\top)\bar{x} = 0 \because\) if dual slack variables \(\not= 0\), primal variables \(= 0\), vice versa¶
print(
f"2nd Complementary Slackness Passed!"
if (y_bar[[0, 1, 2, 3]].T @ -A[:, [0, 1, 2]] - c[[0, 1, 2]].T) @ x_bar[[0, 1, 2]]
== 0
else f"2nd Complementary Slackness Failed!"
)
2nd Complementary Slackness Passed!
5.9¶
Solve the following LP by the Dual Simplex Algorithm. Explain why it is advisable to solve such LPs by the Dual Simplex Algorithm.
Write down the optimal primal solution and the optimal dual solution.
Dual
A = np.array(
[
[-2, -4, 1, 0, 0, 0, 0],
[-4, 1, 0, 1, 0, 0, 0],
[2, -5, 0, 0, 1, 0, 0],
[-5, -1, 0, 0, 0, 1, 0],
[-3, -6, 0, 0, 0, 0, 1],
]
)
b = np.array([-4, -2, -5, -10, -9])
c = np.array([1, 2, 0, 0, 0, 0, 0])
# Create one vector optimization variable.
x = cp.Variable((7,), integer=False)
# Create constraints.
constraints = [
A @ x == b,
x >= 0,
]
# Form objective.
obj = cp.Minimize(c.T @ x)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 5.0
The optimal solution is: x = [1.67, 1.67, 6.0, 3.0, -0.0, -0.0, 6.0]
\({c_B}^\top B^{-1} N - c_N\)
(c[[0, 1, 2, 3, 6]].T @ np.linalg.inv(A[:, [0, 1, 2, 3, 6]]) @ A[:, [4, 5]]) - c[[4, 5]]
array([-0.33333333, -0.33333333])
Since the reduced cost coefficients are \(\leq 0\) for a minimization problem, the solution is optimal.
\(B^{-1}N\)
np.linalg.inv(A[:, [0, 1, 2, 3, 6]]) @ A[:, [4, 5]]
array([[ 0.03703704, -0.18518519],
[-0.18518519, -0.07407407],
[-0.66666667, -0.66666667],
[ 0.33333333, -0.66666667],
[-1. , -1. ]])
\(B^{-1}b\)
np.linalg.inv(A[:, [0, 1, 2, 3, 6]]) @ b
array([1.66666667, 1.66666667, 6. , 3. , 6. ])
\({c_B}^\top B^{-1}b\)
c[[0, 1, 2, 3, 6]].T @ np.linalg.inv(A[:, [0, 1, 2, 3, 6]]) @ b
5.0
From the primal optimal solution \((x_1=5/3,x_2=5/3,s_1=6,s_2=3,s_3=0,s_4=0,s_5=6)\) with objective value \(5\), we can read off the optimal dual solution \((y_1=0,y_2=0,y_3=1/3,y_4=1/3,y_5=0,t_1=0,t_2=0)\): the reduced cost coefficients of the primal slack variables give the optimal dual variables, and by complementary slackness both dual slack variables \(t_1, t_2\) are 0 because their corresponding \(x_1, x_2\) are nonzero.
Looking at the constraints of the LP, we see that the constraint \(3x_1 + 6x_2 \geq 9\) renders the constraint \(2x_1 + 4x_2 \geq 4\) redundant. This makes the LP degenerate, since its BFS will contain a basic variable with a value of 0. Degeneracy makes the Primal Simplex slower to run, as it wastes computation adding artificial variables and going through Phase I of the Two-Phase Simplex; therefore, for LPs that are likely degenerate, it is advisable to convert the LP to its dual form and run the Dual Simplex method instead.
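A quick numerical look at the redundancy (rows of the standard-form data above, converted back to the original \(\geq\) constraints):
# Original constraints: 2x1 + 4x2 >= 4 (row 1) and 3x1 + 6x2 >= 9 (row 5);
# scaling row 5 by 2/3 gives 2x1 + 4x2 >= 6, which implies row 1.
print(-A[0, :2], (2 / 3) * -A[4, :2])  # same left-hand side: [2, 4]
print(-b[0], (2 / 3) * -b[4])  # RHS 4 vs 6: row 5 is the tighter constraint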
Chapter 6¶
6.2¶
Consider the linear programming problem
A = np.array([[3, 4, -1, 2, -1], [-1, 0, 4, 1, -2], [-2, 1, -2, 0, 5]])
b = np.array([14, 15, 10])
c = np.array([4, 3, 2, 6, 1])
(a) Verify that the matrix \(B = [A_{\bullet1} A_{\bullet3} A_{\bullet5}]\) is a feasible but not optimal basis for the above LP.
B = A[:, [0, 2, 4]]
NBV = A[:, [1, 3]]
print(f"B has full rank." if np.linalg.det(B) != 0 else f"B does not have full rank.")
B has full rank.
Since the determinant of \(B \not= 0\), by the Invertible Matrix Theorem its columns span \(\mathbb{R}^3\) and therefore form a basis for this LP with 3 constraints.
print(
f"B is feasible because B^-1b >= 0."
if np.alltrue(np.linalg.inv(B) @ b >= 0)
else f"B is not feasible because B^-1b < 0."
)
B is feasible because B^-1b >= 0.
print(
"Basis is optimal as reduced costs for minimization problem are <= 0!"
if np.alltrue(c[[0, 2, 4]] @ (np.linalg.inv(B) @ NBV) - c[[1, 3]] <= 0)
else f"Basis is not optimal as reduced costs for minimization problem are > 0!"
)
Basis is not optimal as reduced costs for minimization problem are > 0!
(b) Compute the inverse of \(\tilde{B} = [A_{\bullet2} A_{\bullet3} A_{\bullet5}]\) and verify that \(\tilde{B}\) is an optimal basis for the given LP.
B_tilde = A[:, [1, 2, 4]]
NBV_tilde = A[:, [0, 3]]
print(
"B_tilde has full rank."
if np.linalg.det(B_tilde) != 0
else f"B_tilde does not have full rank."
)
B_tilde has full rank.
Since the determinant of \(\tilde{B} \not= 0\), by the Invertible Matrix Theorem its columns span \(\mathbb{R}^3\) and therefore form a basis for this LP with 3 constraints.
print(
f"B_tilde is feasible because B_tilde^-1b >= 0."
if np.alltrue(np.linalg.inv(B_tilde) @ b >= 0)
else f"B_tilde is not feasible because B_tilde^-1b < 0."
)
B_tilde is feasible because B_tilde^-1b >= 0.
print(
"Basis is optimal as reduced costs for minimization problem are <= 0!"
if np.alltrue(c[[1, 2, 4]] @ (np.linalg.inv(B_tilde) @ NBV_tilde) - c[[0, 3]] <= 0)
else f"Basis is not optimal as reduced costs for minimization problem are > 0!"
)
Basis is optimal as reduced costs for minimization problem are <= 0!
(c) Perform ranging on the right-hand side of the given LP.
For the given basis \(\tilde{B}\) to remain optimal when the RHS of the LP changes, \(\tilde{B}^{-1}b\) needs to remain \(\geq 0\):
def δ_b_range(B, b):
    """For each b_i, bound the perturbation δ_i keeping B^-1 b + δ_i B^-1 e_i >= 0."""
B_inv = np.linalg.inv(B)
b_bar = B_inv @ b
for row_idx in range(B_inv.shape[0]):
        δ_candidates = -b_bar / B_inv[:, row_idx]
        neg_mask = B_inv[:, row_idx] < 0
        pos_mask = B_inv[:, row_idx] > 0
        δ_ub = δ_candidates[neg_mask].min() if neg_mask.any() else np.inf
        δ_lb = δ_candidates[pos_mask].max() if pos_mask.any() else -np.inf
if np.allclose(δ_ub, 0):
δ_ub = 0
if np.allclose(δ_lb, 0):
δ_lb = 0
if δ_ub == 0 and δ_lb == 0:
print(f"δ_{row_idx+1} cannot be perturbed.")
else:
print(f"{δ_lb} <= δ_{row_idx+1} <= {δ_ub}.")
# Feasibility Checks
assert np.alltrue(δ_lb * np.linalg.inv(B)[:, row_idx] >= -b_bar), print(
"δ_lb", δ_lb * np.linalg.inv(B)[:, row_idx], -b_bar
)
assert np.alltrue(δ_ub * np.linalg.inv(B)[:, row_idx] >= -b_bar), print(
"δ_ub", δ_ub * np.linalg.inv(B)[:, row_idx], -b_bar
)
δ_b_range(B_tilde, b)
-24.3125 <= δ_1 <= 52.25.
-17.476190476190474 <= δ_2 <= inf.
-13.0625 <= δ_3 <= inf.
(d) Perform ranging on the cost coefficients of the given LP.
For the given basis \(\tilde{B}\) to remain optimal when the coefficients of the objective function change, \({c_B}^\top\tilde{B}^{-1}\tilde{NBV} - {c_j}\) needs to remain \(\leq 0\) for a minimization problem:
\(\tilde{B}^{-1}\tilde{NBV}\)
np.linalg.inv(B_tilde) @ NBV_tilde
array([[ 0.41428571, 0.55714286],
[-0.61428571, 0.24285714],
[-0.72857143, -0.01428571]])
def c_range(A, c, basic_var_idxs, problem_type="min"):
    """For each cost coefficient c_j, bound the values for which the basis stays optimal."""
assert problem_type == "min" or problem_type == "max", print(
"problem_type must be either 'min' or 'max'."
)
nonbasic_var_idxs = [idx for idx in range(len(c)) if idx not in basic_var_idxs]
B, N = A[:, basic_var_idxs], A[:, nonbasic_var_idxs]
B_inv = np.linalg.inv(B)
for idx in range(len(c)):
if idx in basic_var_idxs:
RHS_basic_var_cost_coeffs = -(B_inv @ N)[list(basic_var_idxs).index(idx), :]
LHS = (
c[
[
basic_var_idx
for basic_var_idx in basic_var_idxs
if basic_var_idx != idx
]
].T
@ (B_inv @ N)[
[
i
for i in range(len(basic_var_idxs))
if i != list(basic_var_idxs).index(idx)
],
:,
]
- c[nonbasic_var_idxs]
)
c_lb_RHS_basic_var_cost_coeffs_idxs = np.argwhere(
np.array(RHS_basic_var_cost_coeffs) > 0
).flatten()
c_ub_RHS_basic_var_cost_coeffs_idxs = np.argwhere(
np.array(RHS_basic_var_cost_coeffs) < 0
).flatten()
c_lb_candidates = (
(
LHS[c_lb_RHS_basic_var_cost_coeffs_idxs]
/ RHS_basic_var_cost_coeffs[c_lb_RHS_basic_var_cost_coeffs_idxs]
)
if len(c_lb_RHS_basic_var_cost_coeffs_idxs) > 0
else []
)
c_ub_candidates = (
(
LHS[c_ub_RHS_basic_var_cost_coeffs_idxs]
/ RHS_basic_var_cost_coeffs[c_ub_RHS_basic_var_cost_coeffs_idxs]
)
if len(c_ub_RHS_basic_var_cost_coeffs_idxs) > 0
else []
)
c_lb = max(c_lb_candidates) if len(c_lb_candidates) > 0 else -np.inf
c_ub = min(c_ub_candidates) if len(c_ub_candidates) > 0 else np.inf
print(f"{c_lb} <= c_{idx+1} <= {c_ub}")
else:
print(
f"c_{idx+1} >= {c[basic_var_idxs].T @ (B_inv @ N)[:, list(nonbasic_var_idxs).index(idx)]}"
)
c_range(A=A, c=c, basic_var_idxs=[1, 2, 4])
c_1 >= -0.7142857142857142
-inf <= c_2 <= 9.923076923076923
-5.674418604651164 <= c_3 <= 17.88235294117647
c_4 >= 2.142857142857143
-5.470588235294117 <= c_5 <= inf
Optimality Conditions
print(f"c_1 >= {c[[1, 2, 4]].T @ (np.linalg.inv(B_tilde) @ NBV_tilde)[:, 0]}")
c_1 >= -0.7142857142857142
print(
f"c_2 <= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[1, 2], 0] @ c[[2, 4]] - c[0]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[0, 0]}"
)
c_2 <= 14.379310344827585
print(
f"c_2 <= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[1, 2], 1] @ c[[2, 4]] - c[3]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[0, 1]}"
)
c_2 <= 9.923076923076923
print(
f"c_3 >= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[0, 2], 0] @ c[[1, 4]] - c[0]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[1, 0]}"
)
c_3 >= -5.674418604651164
print(
f"c_3 <= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[0, 2], 1] @ c[[1, 4]] - c[3]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[1, 1]}"
)
c_3 <= 17.88235294117647
print(f"c_4 >= {c[[1, 2, 4]].T @ (np.linalg.inv(B_tilde) @ NBV_tilde)[:, 1]}")
c_4 >= 2.142857142857143
print(
f"c_5 >= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[0, 1], 0] @ c[[1, 2]] - c[0]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[2, 0]}"
)
c_5 >= -5.470588235294117
print(
f"c_5 >= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[0, 1], 1] @ c[[1, 2]] - c[3]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[2, 1]}"
)
c_5 >= -269.00000000000017
Range for Cost Coefficients (summarizing the outputs above): \(c_1 \geq -0.71\); \(c_2 \leq 9.92\); \(-5.67 \leq c_3 \leq 17.88\); \(c_4 \geq 2.14\); \(c_5 \geq -5.47\).
6.5¶
Solve the parametric linear program \(P_\lambda\)
where \(\lambda\) is a parameter belonging to the interval \([−15, 15]\). Organize your answer in a table which gives the optimal solution (including the objective function value) as a function of the parameter \(\lambda\).
We first split the interval into three ranges:
1. \(-12 \leq \lambda \leq -6\)
2. \(-15 \leq \lambda < -12\)
3. \(-6 < \lambda \leq 15\)
1st Range for \(\lambda: -12 \leq \lambda \leq -6\)
Simplex Tableau:
Optimal because the reduced cost coefficients are \(\leq 0\) for a minimization problem.
Hence, current basis is optimal for \(-12 \leq \lambda \leq -6\) and the optimal solution is \((x_1=0,x_2=0,x_3=0,s_1=12+\lambda, s_2=-6-\lambda)\), objective value is 0.
2nd Range for \(\lambda: -15 \leq \lambda < -12\)
Since \(12 + \lambda < 0\) in this range, we need a new basis: \(s_1\) leaves and \(x_2\) enters by the dual simplex ratio test.
Simplex Tableau:
Optimal because the reduced cost coefficients are \(\leq 0\) for a minimization problem.
Hence, current basis is optimal and the optimal solution is \((x_1=0,x_2=-12-\lambda,x_3=0,s_1=0, s_2=-54-5\lambda)\).
3rd Range for \(\lambda: -6 < \lambda \leq 15\)
Since \(-6 - \lambda < 0\) in this range, we need a new basis: \(s_2\) leaves and \(x_3\) enters by the dual simplex ratio test.
Simplex Tableau:
Optimal because the reduced cost coefficients are \(\leq 0\) for a minimization problem.
Hence, current basis is optimal and the optimal solution is \((x_1=0,x_2=0,x_3=3/4+1/8\lambda,s_1=33/4 + 3/8\lambda, s_2=0)\).
6.8¶
The LP of Example 3.8 is of the form commonly encountered in optimal resource allocation problems, where the right-hand side represents amounts of resources and the coefficient matrix represents how much each activity consumes of each resource when operated at unit level. The objective function coefficients are then the contributions of the activities when operated at unit level. With this interpretation of the model, how much would an additional unit of each resource add to the optimal value of the objective?
Example 3.8
c = np.array([-4, -6, -3, -1])
A = np.array([[1,4,8,6], [4,1,2,1], [2,3,1,2]])
b = np.array([11, 7, 2])
# Create one vector optimization variable.
x = cp.Variable((4,), integer=False)
# Create constraints.
constraints = [
A @ x <= b,
x >= 0,
]
# Form objective.
obj = cp.Minimize(c.T @ x)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: -5.33
The optimal solution is: x = [0.33, 0.0, 1.33, 0.0]
We can determine how much each additional unit of resource adds to the objective by examining the dual variables.
constraints[0].dual_value
array([0.13333333, 0. , 1.93333333])
Hence, adding 1 unit to the 1st constraint increases the objective value by \(2/15\), adding 1 unit to the 2nd constraint increases it by 0, and adding 1 unit to the 3rd constraint increases it by \(29/15\).
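As a quick numerical confirmation, a sketch re-solving the minimization with each resource increased by one unit (these unit perturbations keep the same basis feasible, so the change equals the dual value):
for i in range(3):
    b_pert = b.astype(float).copy()
    b_pert[i] += 1  # one extra unit of resource i
    x_pert = cp.Variable(4)
    pert = cp.Problem(cp.Minimize(c.T @ x_pert), [A @ x_pert <= b_pert, x_pert >= 0])
    pert.solve()
    print(f"b_{i+1} + 1: objective changes by {np.round(pert.value - prob.value, 4)}")
# expected changes: -2/15, 0, -29/15 (the duals above, with the minimization sign)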
6.13¶
Consider the linear program \(P(\lambda)\)
A = np.array([[5, 3, 2, -2, 1], [1, 1, 1, -1, 1]])
c = np.array([3, 4, 2, -1, 1])
d = np.array([0, 2, 0, 0, -3])
b = np.array([20, 8])
(a) Show that \(B = [A_{\bullet2} A_{\bullet3}]\) is a feasible basis for this problem.
B = A[:, [1, 2]]
NBV = A[:, [0, 3, 4]]
print(f"B has full rank." if np.linalg.det(B) != 0 else f"B does not have full rank.")
B has full rank.
Since the determinant of \(B \not= 0\), by the Invertible Matrix Theorem its columns span \(\mathbb{R}^2\) and therefore form a basis for this LP with 2 constraints.
print(
f"B is feasible because B^-1b >= 0."
if np.alltrue(np.linalg.inv(B) @ b >= 0)
else f"B is not feasible because B^-1b < 0."
)
B is feasible because B^-1b >= 0.
(b) For what range of \(\lambda\) values is \(B\) an optimal basis for \(P(\lambda)\)?
For \(B\) to be an optimal basis for \(P(\lambda)\), we need \({c_B}^\top B^{-1} N - c_N \leq 0\) for minimization problem:
Hence, \(\lambda \leq -5/6\).
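Numerically, since the cost vector is \(c + \lambda d\), the reduced costs are affine in \(\lambda\); a small sketch (variable names are ours) recovering the bound:
# Reduced costs of the nonbasic columns [0, 3, 4]: r(λ) = r_c + λ·r_d.
B_inv = np.linalg.inv(A[:, [1, 2]])
r_c = c[[1, 2]] @ B_inv @ A[:, [0, 3, 4]] - c[[0, 3, 4]]
r_d = d[[1, 2]] @ B_inv @ A[:, [0, 3, 4]] - d[[0, 3, 4]]
print(r_c, r_d)  # r(λ) = (5 + 6λ, -1, -1 + λ)
# Requiring r(λ) <= 0 componentwise, the binding bound comes from 5 + 6λ <= 0:
print("λ <=", min(-r_c[j] / r_d[j] for j in range(3) if r_d[j] > 0))  # -5/6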
(c) Solve this parametric linear programming problem for all \(\lambda \leq −1/4\). That is, find the set of values of the parameter \(\lambda\) for which \(P(\lambda)\) has an optimal solution, and for all such values, find an optimal point \(\bar{x}(\lambda)\) and the corresponding optimal objective value \(\bar{z}(\lambda)\).
We first split the range for \(\lambda\) into 1. \(\lambda \leq -5/6\) and 2. \(-5/6 < \lambda \leq -1/4\).
Simplex Tableau:
Simplex Tableau with feasible basis \(B = [A_{\bullet2} A_{\bullet3}]\):
For the first range of \(\lambda\), \(\lambda \leq -5/6\), the reduced cost coefficients \(5+6\lambda \leq 0\) and \(-1+\lambda < 0\), meaning that the solution \((x_1=0, x_2=4, x_3=4, x_4=0, x_5=0)\) is optimal for a minimization problem, with objective value \(24+8\lambda\).
However, for the 2nd range of \(\lambda\), \(-5/6 < \lambda \leq -1/4\), the reduced cost coefficients are \(5+6\lambda > 0\) and \(-1+\lambda < 0\), so the feasible basis is no longer optimal. We change basis to \(B = [A_{\bullet1} A_{\bullet3}]\): the minimum ratio test chooses \(x_2\) to leave and \(x_1\) enters:
Simplex Tableau with feasible basis \(B = [A_{\bullet1} A_{\bullet3}]\):
Hence, for \(-5/6 < \lambda \leq -1/4\), the reduced cost coefficients are \(\leq 0\) for a minimization problem and the basis is optimal. The optimal solution then becomes \((x_1=4/3,x_2=0,x_3=20/3,x_4=0, x_5=0)\), with objective value \(52/3\).
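As a numerical check of both ranges, a sketch solving \(P(\lambda)\) with CVXPY at a few sample values of \(\lambda\):
for lam in [-2.0, -1.0, -0.5, -0.25]:
    x_lam = cp.Variable(5)
    p_lam = cp.Problem(cp.Minimize((c + lam * d) @ x_lam), [A @ x_lam == b, x_lam >= 0])
    p_lam.solve()
    print(f"λ = {lam:+.2f}: z = {np.round(p_lam.value, 4)}, x = {np.round(x_lam.value, 4)}")
# expect z = 24 + 8λ for λ <= -5/6, and z = 52/3 ≈ 17.3333 for -5/6 < λ <= -1/4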
6.16¶
Let
The first four columns of the matrix \(A\) constitute an optimal basis \(B\) in the linear program
Dual
A = np.array(
[[3, 2, 1, 0, 4, -1], [2, 4, 2, -4, 3, 2], [1, 2, 3, 4, 5, 6], [0, -4, 4, 5, -2, 1]]
)
B = A[:, :4]
b = np.array([11, 22, 21, 8])
c = np.array([11, 14, 17, 1, 26, 20])
# Create one vector optimization variable.
x = cp.Variable((6,), integer=False)
# Create constraints.
constraints = [
A @ x == b,
x >= 0,
]
# Form objective.
obj = cp.Minimize(c.T @ x)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 127.0
The optimal solution is: x = [0.0, 3.0, 5.0, -0.0, 0.0, 0.0]
(a) Find the inverse of \(B\).
np.linalg.inv(B)
array([[ 0.44736842, -0.09210526, -0.15789474, 0.05263158],
[-0.09210526, 0.02631579, 0.22368421, -0.15789474],
[-0.15789474, 0.22368421, 0.02631579, 0.15789474],
[ 0.05263158, -0.15789474, 0.15789474, -0.05263158]])
(b) Find the basic solution corresponding to B and verify that it is a degenerate optimal solution of the LP.
x = np.zeros(c.shape)
x[:4] = np.linalg.inv(B) @ b
print(
f"Basic Solution: {['x_' + str(i+1) + '=' + str(np.round(x_i, 2)) for i, x_i in enumerate(x)]}"
)
Basic Solution: ['x_1=0.0', 'x_2=3.0', 'x_3=5.0', 'x_4=0.0', 'x_5=0.0', 'x_6=0.0']
Since two of the basic variables (\(x_1\) and \(x_4\)) take the value 0, the basic solution is degenerate; there were redundant constraints.
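Degeneracy aside, optimality still needs verifying: the reduced costs of the nonbasic columns \(x_5, x_6\) must be \(\leq 0\) for this minimization problem (a one-line check):
print(c[0:4] @ np.linalg.inv(B) @ A[:, [4, 5]] - c[[4, 5]])  # expect [-5., -2.] <= 0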
(c) Find the optimal solution of the dual problem.
By Complementary Slackness, we know that dual slack variables \(t_k, k \in [1, 2, 3, 4, 5, 6]\) will be:
Plugging values into the dual constraints:
Dual Optimal Solution:
c[0:4]@np.linalg.inv(B)
array([1., 3., 2., 1.])
\(\therefore\) Dual Optimal Solution \(\bar{y}\):
(d) For each of the components of the right-hand side vector b determine whether (positive or negative) perturbations of that component yield a feasible solution of the LP. (Hint: See Section 6.2.)
For the basis to remain feasible under a perturbation \(\delta b\) of the right-hand side, the nonnegative basic solution \(\bar{b} = B^{-1}b\) must satisfy \(\bar{b} + B^{-1}\delta b \geq 0\):
def δ_b_range(B, b):
B_inv = np.linalg.inv(B)
b_bar = B_inv @ b
for row_idx in range(B_inv.shape[0]):
        δ_candidates = -b_bar / B_inv[:, row_idx]
        # Guard against empty candidate sets (mirrors the version in 6.2(c)).
        neg_mask = B_inv[:, row_idx] < 0
        pos_mask = B_inv[:, row_idx] > 0
        δ_ub = δ_candidates[neg_mask].min() if neg_mask.any() else np.inf
        δ_lb = δ_candidates[pos_mask].max() if pos_mask.any() else -np.inf
if np.allclose(δ_ub, 0):
δ_ub = 0
if np.allclose(δ_lb, 0):
δ_lb = 0
if δ_ub == 0 and δ_lb == 0:
print(f"δ_{row_idx+1} cannot be perturbed.")
else:
print(f"{δ_lb} <= δ_{row_idx+1} <= {δ_ub}.")
# Feasibility Checks
assert np.alltrue(δ_lb * np.linalg.inv(B)[:, row_idx] >= -b_bar), print(
"δ_lb", δ_lb * np.linalg.inv(B)[:, row_idx], -b_bar
)
assert np.alltrue(δ_ub * np.linalg.inv(B)[:, row_idx] >= -b_bar), print(
"δ_ub", δ_ub * np.linalg.inv(B)[:, row_idx], -b_bar
)
δ_b_range(B, b)
0 <= δ_1 <= 31.666666666666668.
-22.352941176470587 <= δ_2 <= 0.
δ_3 cannot be perturbed.
δ_4 cannot be perturbed.
(e) What does all this say about giving an economic interpretation to the shadow prices in this LP?
c[0:4] @ np.linalg.inv(B)
array([1., 3., 2., 1.])
The dual variables (shadow prices) indicate how much the objective value increases when the RHS of the corresponding primal constraint is increased within its allowable range. E.g., if \(b_1 = 11 + (\delta_1=1)\) instead of \(11\) (allowed, because \(\delta_1=1\) is within the range of perturbation found in the previous part), the objective increases by 1. By complementary slackness, if a primal constraint is not active/binding (i.e., has slack), its dual variable is 0; conversely, a dual variable \(> 0\) implies its constraint is active/binding.
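A sketch probing this numerically: re-solve the LP with \(b_i \pm 1\) and compare the actual objective change with the shadow-price prediction \(y_i\delta\). For perturbations that fall outside the ranges of part (d), the basis changes and the prediction need not hold:
y_shadow = c[0:4] @ np.linalg.inv(B)  # shadow prices (1, 3, 2, 1)
for i in range(4):
    for delta in (-1.0, 1.0):
        b_pert = b + delta * np.eye(4)[i]
        x_pert = cp.Variable(6)
        pert = cp.Problem(cp.Minimize(c.T @ x_pert), [A @ x_pert == b_pert, x_pert >= 0])
        pert.solve()
        print(
            f"δb_{i+1} = {delta:+.0f}: actual Δz = {np.round(pert.value - prob.value, 4)},"
            f" predicted = {np.round(y_shadow[i] * delta, 4)}"
        )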
(f) Investigate what can be said about shadow prices in other LPs having degenerate optimal basic solutions such that only one basic variable has value zero?
The shadow price will be 0 for that basic variable whose value is 0.
Extra Question¶
Consider the following linear program:
Parts (b) through (e) below are independent of each other.
Standard Form
A = np.array([[1, 2, 1, 1, 0], [-1, 1, -2, 0, 1]])
b = np.array([8, 4])
c = np.array([2, 1, -1, 0, 0])
(a) You are given that an optimal solution of the problem has \(x_1\) and \(x_5\) as the basic variables. Compute this solution.
Final Tableau for Optimal Solution:
np.linalg.inv(A[:, [0, 4]]) @ b
array([ 8., 12.])
(c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3]]) - c[[1, 2, 3]]
array([3., 3., 2.])
Since the reduced cost coefficients are \(\geq 0\) for a maximization problem, the solution is optimal.
Optimal Solution: \((x_1=8, x_2=0, x_3=0, x_4=0, x_5=12)\)
# Create one vector optimization variable.
x = cp.Variable((5,), integer=False)
# Create constraints.
constraints = [
A @ x == b,
x >= 0,
]
# Form objective.
obj = cp.Maximize(c.T @ x)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 16.0
The optimal solution is: x = [8.0, 0.0, -0.0, 0.0, 12.0]
(b) Using sensitivity analysis, find a new optimal solution if the coefficient of \(x_2\) in the objective function is changed from 1 to 5.
A = np.array([[1, 2, 1, 1, 0], [-1, 1, -2, 0, 1]])
b = np.array([8, 4])
c = np.array([2, 5, -1, 0, 0])
\({c_B}^\top B^{-1} N - c_N\)
(c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3]]) - c[[1, 2, 3]]
array([-1., 3., 2.])
Since the reduced cost coefficients are no longer all \(\geq 0\) for a maximization problem, the solution is no longer optimal.
\(B^{-1}N\)
np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3]]
array([[ 2., 1., 1.],
[ 3., -1., 1.]])
\(B^{-1}b\)
np.linalg.inv(A[:, [0, 4]]) @ b
array([ 8., 12.])
\({c_B}^\top B^{-1}b\)
c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ b
16.0
\(x_2\) is the entering variable; \(x_5\) is chosen as the leaving variable by the minimum ratio test.
Since the reduced cost coefficients are \(\geq 0\) for a maximization problem and the RHS is \(\geq 0\), the new optimal solution is \((x_1=0, x_2=4, x_3=0, x_4=0, x_5=0)\).
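A quick check of the post-pivot basis \(\{x_1, x_2\}\), as a sketch:
B_inv_new = np.linalg.inv(A[:, [0, 1]])
print(B_inv_new @ b)  # basic values: x1 = 0, x2 = 4 (degenerate)
print(c[[0, 1]] @ B_inv_new @ A[:, [2, 3, 4]] - c[[2, 3, 4]])  # all >= 0 => optimal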
# Create one vector optimization variable.
x = cp.Variable((5,), integer=False)
# Create constraints.
constraints = [
A @ x == b,
x >= 0,
]
# Form objective.
obj = cp.Maximize(c.T @ x)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 20.0
The optimal solution is: x = [0.0, 4.0, -0.0, 0.0, 0.0]
(c) Suppose that the coefficient of \(x_2\) in the first constraint is changed from 2 to 1/6. Using sensitivity analysis, find a new optimal solution.
A = np.array([[1, 1 / 6, 1, 1, 0], [-1, 1, -2, 0, 1]])
b = np.array([8, 4])
c = np.array([2, 1, -1, 0, 0])
\({c_B}^\top B^{-1} N - c_N\)
(c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3]]) - c[[1, 2, 3]]
array([-0.66666667, 3. , 2. ])
Since the reduced cost coefficients are no longer all \(\geq 0\) for a maximization problem, the solution is no longer optimal.
\(B^{-1}N\)
np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3]]
array([[ 0.16666667, 1. , 1. ],
[ 1.16666667, -1. , 1. ]])
\(B^{-1}b\)
np.linalg.inv(A[:, [0, 4]]) @ b
array([ 8., 12.])
\({c_B}^\top B^{-1}b\)
c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ b
16.0
\(x_2\) is the entering variable; \(x_5\) is chosen as the leaving variable by the minimum ratio test.
Since the reduced cost coefficients are \(\geq 0\) for a maximization problem and the RHS is \(\geq 0\), the new optimal solution is \((x_1=6.29, x_2=10.29, x_3=0, x_4=0, x_5=0)\).
Double Check:
# Create one vector optimization variable.
x = cp.Variable((5,), integer=False)
# Create constraints.
constraints = [
A @ x == b,
x >= 0,
]
# Form objective.
obj = cp.Maximize(c.T @ x)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 22.86
The optimal solution is: x = [6.29, 10.29, 0.0, 0.0, 0.0]
\({c_B}^\top B^{-1} N - c_N\)
(c[[0, 1]].T @ np.linalg.inv(A[:, [0, 1]]) @ A[:, [2, 3, 4]]) - c[[2, 3, 4]]
array([2.42857143, 2.57142857, 0.57142857])
Since the reduced cost coefficients are \(\geq 0\) for a maximization problem, the solution is optimal.
\(B^{-1}N\)
np.linalg.inv(A[:, [0, 1]]) @ A[:, [2, 3, 4]]
array([[ 1.14285714, 0.85714286, -0.14285714],
[-0.85714286, 0.85714286, 0.85714286]])
\(B^{-1}b\)
np.linalg.inv(A[:, [0, 1]]) @ b
array([ 6.28571429, 10.28571429])
\({c_B}^\top B^{-1}b\)
c[[0, 1]].T @ np.linalg.inv(A[:, [0, 1]]) @ b
22.857142857142858
(d) Suppose that the following constraint is added to the problem: \(x_2 + 2x_3 = 3\). Using sensitivity analysis, find the new optimal solution.
A = np.array([[1, 2, 1, 1, 0], [-1, 1, -2, 0, 1], [0, 1, 2, 0, 0]])
b = np.array([8, 4, 3])
c = np.array([2, 1, -1, 0, 0])
Optimal Solution: \((x_1=8, x_2=0, x_3=0, x_4=0, x_5=12)\)
\(0 + 2(0) \not= 3, \therefore\) the current optimal solution is infeasible for the new constraint. Hence, we have to add the new constraint and re-solve the system using the Dual Simplex.
Make \(x_2\) the pivot by min ratio test on 3rd row:
Make \(x_3\) entering variable, \(x_2\) leaving:
Since the reduced cost coefficients are \(\geq 0\) for a maximization problem [OPTIMAL] and the RHS basic-variable values are \(\geq 0\) [FEASIBLE], the Dual Simplex terminates.
Optimal solution: \((x_1=6.5, x_2=0, x_3=1.5, x_4=0, x_5=13.5)\)
Double Check:
# Create one vector optimization variable.
x = cp.Variable((5,), integer=False)
# Create constraints.
constraints = [
A @ x == b,
x >= 0,
]
# Form objective.
obj = cp.Maximize(c.T @ x)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 11.5
The optimal solution is: x = [6.5, 0.0, 1.5, 0.0, 13.5]
(e) Suppose that a new activity \(x_6\) is proposed with unit return 6 and consumption vector (2, 1). Using sensitivity analysis, find a new optimal solution.
A = np.array([[1, 2, 1, 1, 0, 2], [-1, 1, -2, 0, 1, 1]])
b = np.array([8, 4])
c = np.array([2, 1, -1, 0, 0, 6])
\({c_B}^\top B^{-1} N - c_N\)
(c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3, 5]]) - c[[1, 2, 3, 5]]
array([ 3., 3., 2., -2.])
Since the reduced cost coefficients are not all \(\geq 0\) for a maximization problem, the solution is no longer optimal; we might want to produce some of activity 6.
\(B^{-1}N\)
np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3, 5]]
array([[ 2., 1., 1., 2.],
[ 3., -1., 1., 3.]])
\(B^{-1}b\)
np.linalg.inv(A[:, [0, 4]]) @ b
array([ 8., 12.])
\({c_B}^\top B^{-1}b\)
c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ b
16.0
Choose \(x_6\) as the entering variable and \(x_5\) as the leaving variable by the minimum ratio test of the ordinary simplex method, breaking the tie by choosing the slack variable.
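The tie in the ratio test can be seen numerically (a small sketch):
B_inv = np.linalg.inv(A[:, [0, 4]])
print((B_inv @ b) / (B_inv @ A[:, 5]))  # ratios [4., 4.]: a tie, broken by dropping x5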
Since the reduced cost coefficients are \(\geq 0\) for a maximization problem [OPTIMAL] and the RHS basic-variable values are \(\geq 0\) [FEASIBLE], the simplex method terminates.
Optimal solution: \((x_1=0, x_2=0, x_3=0, x_4=0, x_5=0, x_6=4)\)
Double Check:
# Create one vector optimization variable.
x = cp.Variable((6,), integer=False)
# Create constraints.
constraints = [
A @ x == b,
x >= 0,
]
# Form objective.
obj = cp.Maximize(c.T @ x)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()
print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 24.0
The optimal solution is: x = [0.0, 0.0, -0.0, 0.0, 0.0, 4.0]