HW 3

ISE-530 Homework III: Chapters V and VI of Cottle-Thapa. Due 11:59 PM Wednesday September 30, 2020

  • Exercises 5.4, 5.6, 5.8, and 5.9.

  • Exercises 6.2, 6.5, 6.8, 6.13, and 6.16.

%load_ext nb_black
%matplotlib inline

import matplotlib.pyplot as plt

plt.rcParams["figure.dpi"] = 300

import numpy as np
import cvxpy as cp
from collections import namedtuple

Chapter 5

5.4

Find the dual of the following linear program

(156)\[\begin{align} \text{minimize }&c_1x_1 + c_2x_2 + c_3x_3 \\ \text{subject to } &a_{11}x_1 + a_{12}x_2 + a_{13}x_3 \geq b_1 \\ &a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2 \\ &a_{31}x_1 + a_{32}x_2 + a_{33}x_3 \leq b_3 \\ &x_1 \geq 0, x_2 \leq 0, x_3 \text{ free}. \end{align}\]

Find the dual of the dual and thereby show that the dual of the dual is the primal.

Dual

(157)\[\begin{align} \text{maximize }&b_1y_1 + b_2y_2 + b_3y_3 \\ \text{subject to } &a_{11}y_1 + a_{21}y_2 + a_{31}y_3 \leq c_1 \\ &a_{12}y_1 + a_{22}y_2 + a_{32}y_3 \geq c_2 \\ &a_{13}y_1 + a_{23}y_2 + a_{33}y_3 = c_3 \\ &y_1 \geq 0, y_2 \text{ free}, y_3 \leq 0. \end{align}\]

Dual of Dual

(158)\[\begin{align} \text{minimize }&c_1x_1 + c_2x_2 + c_3x_3 \\ \text{subject to } &a_{11}x_1 + a_{12}x_2 + a_{13}x_3 \geq b_1 \\ &a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2 \\ &a_{31}x_1 + a_{32}x_2 + a_{33}x_3 \leq b_3 \\ &x_1 \geq 0, x_2 \leq 0, x_3 \text{ free}. \end{align}\]

\(\therefore\) Dual of the Dual is the primal.
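As a quick numerical sanity check of this primal–dual pair, we can instantiate (156)–(157) with concrete data (the numbers below are our own choice, not part of the exercise) and confirm that strong duality gives both problems the same optimal value:

# Hypothetical data: a = I, so the primal reads x1 >= 1, x2 = 0, x3 <= -1.
a = np.eye(3)
b_chk = np.array([1.0, 0.0, -1.0])
c_chk = np.array([1.0, 1.0, -1.0])

xp = cp.Variable(3)
primal = cp.Problem(
    cp.Minimize(c_chk @ xp),
    [a[0] @ xp >= b_chk[0], a[1] @ xp == b_chk[1], a[2] @ xp <= b_chk[2],
     xp[0] >= 0, xp[1] <= 0],
)
yd = cp.Variable(3)
dual = cp.Problem(
    cp.Maximize(b_chk @ yd),
    [a[:, 0] @ yd <= c_chk[0], a[:, 1] @ yd >= c_chk[1], a[:, 2] @ yd == c_chk[2],
     yd[0] >= 0, yd[2] <= 0],
)
print(primal.solve(), dual.solve())  # both optimal values should equal 2.0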

5.6

Consider the linear program

(159)\[\begin{align} \text{minimize }&c^\top x \\ \text{subject to } &Ax \geq b \\ &x \geq 0 \end{align}\]

in which

\[\begin{equation*} A = \begin{bmatrix} 3 & -1 & 2 \\ 4 & 5 & 0 \\ -6 & 9 & 5 \\ 1 & 1 & 1 \\ \end{bmatrix} \,\,,\,\, b = \begin{bmatrix} 15 \\ 8 \\ 13 \\ 4 \\ \end{bmatrix} \,\,,\,\, c = \begin{bmatrix} 2 \\ 3 \\ 5 \\ \end{bmatrix} \end{equation*}\]

Use the optimality criteria (see page 127) to decide whether or not \(\bar{x} = (2,0,5)\) is an optimal solution of the LP. [Hint: Assume that \(\bar{x}\) is an optimal solution. Use the necessary conditions of optimality to determine what the vector \(\bar{y}\) would have to be. Check to make sure that all the conditions of optimality are met.]

Dual

(160)\[\begin{align} \text{maximize }&b^\top y \\ \text{subject to } &A^\top y \leq c \\ &y \geq 0 \end{align}\]
A = np.array([[3, -1, 2], [4, 5, 0], [-6, 9, 5], [1, 1, 1]])
b = np.array([15, 8, 13, 4])
c = np.array([2, 3, 5])
x_bar = np.array([2, 0, 5])
# Create one vector optimization variable.
x = cp.Variable((3,), integer=False)

# Create constraints.
constraints = [
    A @ x >= b,
    x >= 0,
]

# Form objective.
obj = cp.Minimize(c.T @ x)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()

print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 27.59
The optimal solution is: x = [1.89, 0.09, 4.71]

We will recover the Dual Optimal solution \(\bar{y}\) from Primal Optimal Solution \(\bar{x}\) using the Complementary Slackness Theorem:

A @ x_bar
array([16,  8, 13,  7])
b
array([15,  8, 13,  4])

\(\therefore \bar{x}\) is primal feasible.

Adding the primal slack variables \(s_j, j \in [1,2,3,4]\) into the optimal primal solution \(\bar{x}\):

(161)\[\begin{align} x_1 &= 2 \\ x_2 &= 0 \\ x_3 &= 5 \\ s_1 &= 1 \\ s_2 &= 0 \\ s_3 &= 0 \\ s_4 &= 3 \\ \end{align}\]

By Complementary Slackness, if \(\bar{x}\) is optimal, then the optimal dual solution with dual slack variables \(t_k, k \in [1,2,3]\) must satisfy:

(162)\[\begin{align} y_1 &= 0 \because s_1\not=0 \\ y_4 &= 0 \because s_4\not=0 \\ t_1 &= 0 \because x_1\not=0 \\ t_3 &= 0 \because x_3\not=0 \\ \end{align}\]

while \(y_2, y_3\) (whose slacks \(s_2 = s_3 = 0\)) and \(t_2\) (whose variable \(x_2 = 0\)) are left undetermined by complementary slackness and may be nonzero.

Plugging values into the dual constraints:

(163)\[\begin{align} 3(y_1=0) + 4y_2 - 6y_3 + 1(y_4=0) + (t_1=0) &= 2 \\ 2(y_1=0) + 0y_2 + 5y_3 + 1(y_4=0) + (t_3=0) &= 5 \\ \big\downarrow \\ 4y_2 - 6y_3 &= 2 \\ 5y_3 &= 5 \\ \big\downarrow \\ \therefore y_2 = 2 \text{ and }& y_3 = 1 \end{align}\]

\(\therefore\) Dual Optimal Solution \(\bar{y}\):

(164)\[\begin{align} y_1 &= 0 \\ y_2 &= 2 \\ y_3 &= 1 \\ y_4 &= 0 \\ t_1 &= 0 \\ t_2 &= -16 \\ t_3 &= 0 \\ \end{align}\]
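As a numerical check, we can solve the two tight dual constraints for \((y_2, y_3)\) and recover \(t_2\) from the second dual constraint:

# Solve 4*y2 - 6*y3 = 2 and 5*y3 = 5 for (y2, y3).
print(np.linalg.solve(np.array([[4, -6], [0, 5]]), np.array([2, 5])))  # expect [2. 1.]
# t_2 = c_2 - (A^T y)_2 with y = (0, 2, 1, 0).
print(c[1] - np.array([0, 2, 1, 0]) @ A[:, 1])  # expect -16.0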
y_bar = np.array([0, 2, 1, 0])

A vector \(\bar{x}\) is optimal iff \(\exists \bar{y}\) such that the following 4 optimality conditions hold:

Primal Feasibility \(A\bar{x} \geq b, \bar{x} \geq 0\)

print(
    "Primal Feasible!"
    if np.all(A @ x_bar >= b) and np.all(x_bar >= 0)
    else "Primal Infeasible!"
)
Primal Feasible!

Dual Feasibility \(\bar{y}^\top A \leq c^\top, \bar{y} \geq 0\)

print(
    "Dual Feasible!"
    if np.all(y_bar.T @ A <= c.T) and np.all(y_bar >= 0)
    else "Dual Infeasible!"
)
Dual Infeasible!

Complementary Slackness 1: \(\bar{y}^\top (A\bar{x} - b) = 0 \because\) if primal slack variables \(\not= 0\), dual variables \(= 0\), vice versa

print(
    f"1st Complementary Slackness Passed!"
    if y_bar.T @ (A @ x_bar - b) == 0
    else f"1st Complementary Slackness Failed!"
)
1st Complementary Slackness Passed!

Complementary Slackness 2: \((\bar{y}^\top A - c^\top)\bar{x} = 0 \because\) if dual slack variables \(\not= 0\), primal variables \(= 0\), vice versa

print(
    f"2nd Complementary Slackness Passed!"
    if (y_bar.T @ A - c.T) @ x_bar == 0
    else f"2nd Complementary Slackness Failed!"
)
2nd Complementary Slackness Passed!

Since the dual feasibility optimality condition failed, \(\bar{x}\) is not an optimal solution.

5.8

Consider the linear program

(165)\[\begin{align} \text{minimize } &37x_1 + 29x_2 + 33x_3 \\ \text{subject to } &x_2 + x_3 \geq 1 \\ &x_1 + x_2 \geq 1 \\ &x_1 + x_2 + x_3 \geq 1 \\ &x_1 + x_3 \geq 1 \\ &x_j \geq 0, j=1,2,3. \end{align}\]

Dual

(166)\[\begin{align} \text{maximize } &y_1 + y_2 + y_3 + y_4 \\ \text{subject to } &0y_1 + y_2 + y_3 + y_4 \leq 37 \\ &y_1 + y_2 + y_3 + 0y_4 \leq 29 \\ &y_1 + 0y_2 + y_3 + y_4 \leq 33 \\ &y_k \geq 0, k=1,2,3,4. \end{align}\]
A = np.array(
    [
        [0, -1, -1, 1, 0, 0, 0],
        [-1, -1, 0, 0, 1, 0, 0],
        [-1, -1, -1, 0, 0, 1, 0],
        [-1, 0, -1, 0, 0, 0, 1],
    ]
)
b = np.array([-1, -1, -1, -1])
c = np.array([37, 29, 33, 0, 0, 0, 0])
# Create one vector optimization variable.
x = cp.Variable((7,), integer=False)

# Create constraints.
constraints = [
    A @ x == b,
    x >= 0,
]

# Form objective.
obj = cp.Minimize(c.T @ x)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()

print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 49.5
The optimal solution is: x = [0.5, 0.5, 0.5, -0.0, 0.0, 0.5, -0.0]

\({c_B}^\top B^{-1} N - c_N\)

(c[[0, 1, 2, 5]].T @ np.linalg.inv(A[:, [0, 1, 2, 5]]) @ A[:, [3, 4, 6]]) - c[[3, 4, 6]]
array([-12.5, -16.5, -20.5])

Since reduced cost coefficients are <= 0 for minimization problem, solution is optimal.

\(B^{-1}N\)

np.linalg.inv(A[:, [0, 1, 2, 5]]) @ A[:, [3, 4, 6]]
array([[ 0.5, -0.5, -0.5],
       [-0.5, -0.5,  0.5],
       [-0.5,  0.5, -0.5],
       [-0.5, -0.5, -0.5]])

\(B^{-1}b\)

np.linalg.inv(A[:, [0, 1, 2, 5]]) @ b
array([0.5, 0.5, 0.5, 0.5])

\({c_B}^\top B^{-1}b\)

c[[0, 1, 2, 5]].T @ np.linalg.inv(A[:, [0, 1, 2, 5]]) @ b
49.5

(a) Use the Dual Simplex Algorithm to solve this LP.

To use the Dual Simplex method, the starting tableau has to fulfill the

  1. Optimality condition (all reduced cost coefficients \(\leq 0\) for a minimization problem) and

  2. Infeasibility condition (at least one RHS entry \(< 0\))

Simplex Tableau:

(167)\[\begin{align} &\begin{array}{c} \\ z \\ s_1 \\ s_2 \\ s_3 \\ s_4 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccccc|c} z & x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & s_4 & b \\ \hline 1 & -37 & -29 & -33 & 0 & 0 & 0 & 0 & 0 \\ \hline 0 & 0 & -1 & -1 & 1 & 0 & 0 & 0 & -1 \\ 0 & -1 & -1 & 0 & 0 & 1 & 0 & 0 & -1 \\ 0 & -1 & -1 & -1 & 0 & 0 & 1 & 0 & -1 \\ 0 & -1 & 0 & -1 & 0 & 0 & 0 & 1 & -1 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Since both conditions for Dual Simplex are satisfied, we can proceed.

(168)\[\begin{align} \text{Leaving Variable: }s_1\,\because \text{negative RHS},\,\text{Entering Variable: }x_2\,\because \text{argmax}\Big(\frac{-29}{1}, \frac{-33}{1}\Big) = x_2 \rightarrow& \begin{array}{c} \\ z \\ x_2 \\ s_2 \\ s_3 \\ s_4 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccccc|c} z & x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & s_4 & b \\ \hline 1 & -37 & 0 & -4 & -29 & 0 & 0 & 0 & 29 \\ \hline 0 & 0 & 1 & 1 & -1 & 0 & 0 & 0 & 1 \\ 0 & -1 & 0 & 1 & -1 & 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & -1 & 0 & 1 & 0 & 0 \\ 0 & -1 & 0 & -1 & 0 & 0 & 0 & 1 & -1 \\ \end{array} \end{bmatrix} \\ \end{align}\]
(169)\[\begin{align} \text{Leaving Variable: }s_4\,\because \text{negative RHS},\,\text{Entering Variable: }x_3\,\because \text{argmax}\Big(\frac{-37}{1}, \frac{-4}{1}\Big) = x_3 \rightarrow& \begin{array}{c} \\ z \\ x_2 \\ s_2 \\ s_3 \\ x_3 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccccc|c} z & x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & s_4 & b \\ \hline 1 & -33 & 0 & 0 & -29 & 0 & 0 & -4 & 33 \\ \hline 0 & -1 & 1 & 0 & -1 & 0 & 0 & 1 & 0 \\ 0 & -2 & 0 & 0 & -1 & 1 & 0 & 1 & -1 \\ 0 & -1 & 0 & 0 & -1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 & 0 & -1 & 1 \\ \end{array} \end{bmatrix} \\ \end{align}\]
(170)\[\begin{align} \text{Leaving Variable: }s_2\,\because \text{negative RHS},\,\text{Entering Variable: }x_1\,\because \text{argmax}\Big(\frac{-33}{2}, \frac{-29}{1}\Big) = x_1 \rightarrow& \begin{array}{c} \\ z \\ x_2 \\ x_1 \\ s_3 \\ x_3 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccccc|c} z & x_1 & x_2 & x_3 & s_1 & s_2 & s_3 & s_4 & b \\ \hline 1 & 0 & 0 & 0 & -12.5 & -16.5 & 0 & -20.5 & 49.5 \\ \hline 0 & 0 & 1 & 0 & -0.5 & -0.5 & 0 & 0.5 & 0.5 \\ 0 & 1 & 0 & 0 & 0.5 & -0.5 & 0 & -0.5 & 0.5 \\ 0 & 0 & 0 & 0 & -0.5 & -0.5 & 1 & -0.5 & 0.5 \\ 0 & 0 & 0 & 1 & -0.5 & 0.5 & 0 & -0.5 & 0.5 \\ \end{array} \end{bmatrix} \\ \end{align}\]

The RHS is now \(\geq 0\) [FEASIBLE] and the reduced cost coefficients are \(\leq 0\) [OPTIMAL], so the Dual Simplex Algorithm terminates.
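To make the pivoting above concrete, here is a compact sketch of the Dual Simplex Algorithm (the function name, the tolerance, and the choice to recompute \(B^{-1}\) every iteration for clarity rather than efficiency are our own). Run on the standard-form data \(A, b, c\) defined above with the all-slack starting basis, it reproduces these pivots:

def dual_simplex(A, b, c, basis, max_iters=50):
    # Dual simplex for min c@x s.t. Ax = b, x >= 0, starting from a basis
    # that is dual feasible (all reduced costs c_j - z_j >= 0).
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    basis = list(basis)
    for _ in range(max_iters):
        B_inv = np.linalg.inv(A[:, basis])
        b_bar = B_inv @ b                    # current basic solution
        if np.all(b_bar >= -1e-9):           # primal feasible -> optimal
            x = np.zeros(len(c))
            x[basis] = b_bar
            return x, c @ x
        r = int(np.argmin(b_bar))            # leaving row: most negative RHS
        T = B_inv @ A
        reduced = c - c[basis] @ T           # reduced costs c_j - z_j
        cols = np.where(T[r] < -1e-9)[0]     # eligible entering columns
        if cols.size == 0:
            raise ValueError("primal infeasible (dual unbounded)")
        enter = cols[np.argmin(reduced[cols] / -T[r, cols])]  # dual ratio test
        basis[r] = enter                     # pivot: column enters, row r leaves
    raise RuntimeError("iteration limit reached")

x_ds, z_ds = dual_simplex(A, b, c, basis=[3, 4, 5, 6])
print(np.round(x_ds, 2), z_ds)  # expect x = [0.5 0.5 0.5 0. 0. 0.5 0.], z = 49.5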

(b) Write down the optimal primal solution and the optimal dual solution.

Optimal Primal Solution: \((x_1=0.5, x_2=0.5, x_3=0.5, s_1=0, s_2=0, s_3=0.5, s_4=0)\)

With \(t_k, k\in [1,2,3]\) denoting the dual slack variables, the Optimal Dual Solution is: \((y_1=12.5, y_2=16.5, y_3=0, y_4=20.5, t_1=0, t_2=0, t_3=0)\)
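As a cross-check on \(\bar{y}\), we can re-solve the primal in its original inequality form and read off the solver's constraint duals:

xv = cp.Variable(3, nonneg=True)
G = np.array([[0, 1, 1], [1, 1, 0], [1, 1, 1], [1, 0, 1]])
h = np.ones(4)
ineq = G @ xv >= h
cp.Problem(cp.Minimize(np.array([37, 29, 33]) @ xv), [ineq]).solve()
print(np.round(ineq.dual_value, 2))  # expect [12.5 16.5 0. 20.5]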

(c) Verify that the complementary slackness conditions hold.

Complementary Slackness 1: \(\bar{y}^\top (A\bar{x} - b) = 0 \because\) if primal slack variables \(\not= 0\), dual variables \(= 0\), vice versa

x_bar = np.array([0.5, 0.5, 0.5, 0, 0, 0.5, 0])
y_bar = np.array([12.5, 16.5, 0, 20.5, 0, 0, 0])
print(
    f"1st Complementary Slackness Passed!"
    if y_bar[[0, 1, 2, 3]].T @ (-A[:, [0, 1, 2]] @ x_bar[[0, 1, 2]] - (-b)) == 0
    else f"1st Complementary Slackness Failed!"
)
1st Complementary Slackness Passed!

Complementary Slackness 2: \((\bar{y}^\top A - c^\top)\bar{x} = 0 \because\) if dual slack variables \(\not= 0\), primal variables \(= 0\), vice versa

print(
    f"2nd Complementary Slackness Passed!"
    if (y_bar[[0, 1, 2, 3]].T @ -A[:, [0, 1, 2]] - c[[0, 1, 2]].T) @ x_bar[[0, 1, 2]]
    == 0
    else f"2nd Complementary Slackness Failed!"
)
2nd Complementary Slackness Passed!

5.9

Solve the following LP by the Dual Simplex Algorithm. Explain why it is advisable to solve such LPs by the Dual Simplex Algorithm.

(173)\[\begin{align} \text{minimize } &x_1 + 2x_2 \\ \text{subject to } &2x_1 + 4x_2 \geq 4 \\ &4x_1 - x_2 \geq 2 \\ &-2x_1 + 5x_2 \geq 5 \\ &5x_1 + x_2 \geq 10 \\ &3x_1 + 6x_2 \geq 9 \\ &x_j \geq 0, j=1,2. \end{align}\]

Write down the optimal primal solution and the optimal dual solution.

Dual

(174)\[\begin{align} \text{maximize } &4y_1 + 2y_2 + 5y_3 + 10y_4 + 9y_5 \\ \text{subject to } &2y_1 + 4y_2 -2y_3 + 5y_4 + 3y_5 \leq 1 \\ &4y_1 - y_2 + 5y_3 + 1y_4 + 6y_5 \leq 2 \\ &y_k \geq 0, k=1,2,3,4,5. \end{align}\]
A = np.array(
    [
        [-2, -4, 1, 0, 0, 0, 0],
        [-4, 1, 0, 1, 0, 0, 0],
        [2, -5, 0, 0, 1, 0, 0],
        [-5, -1, 0, 0, 0, 1, 0],
        [-3, -6, 0, 0, 0, 0, 1],
    ]
)
b = np.array([-4, -2, -5, -10, -9])
c = np.array([1, 2, 0, 0, 0, 0, 0])
# Create one vector optimization variable.
x = cp.Variable((7,), integer=False)

# Create constraints.
constraints = [
    A @ x == b,
    x >= 0,
]

# Form objective.
obj = cp.Minimize(c.T @ x)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()

print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 5.0
The optimal solution is: x = [1.67, 1.67, 6.0, 3.0, -0.0, -0.0, 6.0]

\({c_B}^\top B^{-1} N - c_N\)

(c[[0, 1, 2, 3, 6]].T @ np.linalg.inv(A[:, [0, 1, 2, 3, 6]]) @ A[:, [4, 5]]) - c[[4, 5]]
array([-0.33333333, -0.33333333])

Since reduced cost coefficients are <= 0 for minimization problem, solution is optimal.

\(B^{-1}N\)

np.linalg.inv(A[:, [0, 1, 2, 3, 6]]) @ A[:, [4, 5]]
array([[ 0.03703704, -0.18518519],
       [-0.18518519, -0.07407407],
       [-0.66666667, -0.66666667],
       [ 0.33333333, -0.66666667],
       [-1.        , -1.        ]])

\(B^{-1}b\)

np.linalg.inv(A[:, [0, 1, 2, 3, 6]]) @ b
array([1.66666667, 1.66666667, 6.        , 3.        , 6.        ])

\({c_B}^\top B^{-1}b\)

c[[0, 1, 2, 3, 6]].T @ np.linalg.inv(A[:, [0, 1, 2, 3, 6]]) @ b
5.0
(175)\[\begin{align} \vdots \end{align}\]
(176)\[\begin{align} \begin{array}{c} \\ z \\ x_1 \\ x_2 \\ s_1 \\ s_2 \\ s_5 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccccc|c} z & x_1 & x_2 & s_1 & s_2 & s_3 & s_4 & s_5 & b \\ \hline 5 & 0 & 0 & 0 & 0 & -1/3 & -1/3 & 0 & - \\ \hline 0 & 1 & 0 & 0 & 0 & 1/27 & -5/27 & 0 & 5/3 \\ 0 & 0 & 1 & 0 & 0 & -5/27 & -2/27 & 0 & 5/3 \\ 0 & 0 & 0 & 1 & 0 & -2/3 & -2/3 & 0 & 6 \\ 0 & 0 & 0 & 0 & 1 & 1/3 & -2/3 & 0 & 3 \\ 0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 & 6 \\ \end{array} \end{bmatrix} \\ \end{align}\]

From the primal optimal solution \((x_1=5/3,x_2=5/3,s_1=6,s_2=3,s_3=0,s_4=0,s_5=6)\) with objective value \(5\), we can read off the optimal dual solution \((y_1=0,y_2=0,y_3=1/3,y_4=1/3,y_5=0,t_1=0,t_2=0)\): the (negated) reduced cost coefficients of the primal slack variables give the optimal dual variables, and by complementary slackness both dual slack variables \(t_1, t_2\) are 0 because their corresponding \(x_1, x_2\) are nonzero.
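The dual_simplex sketch from Exercise 5.8 reproduces this directly on the 5.9 standard-form data, again starting from the all-slack basis:

x_ds, z_ds = dual_simplex(A, b, c, basis=[2, 3, 4, 5, 6])
print(np.round(x_ds, 2), z_ds)  # expect x = [1.67 1.67 6. 3. 0. 0. 6.], z = 5.0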

Looking at the constraints of the LP, we see that the constraint \(3x_1 + 6x_2 \geq 9\) renders the constraint \(2x_1 + 4x_2 \geq 4\) redundant. This makes the LP degenerate, as its BFS will contain a basic variable with a value of 0. A Primal Simplex approach would waste computation adding artificial variables and running Phase I of the Two-Phase Simplex just to find a starting feasible basis. The all-slack basis, by contrast, is already dual feasible here (all reduced cost coefficients \(\leq 0\)), so the Dual Simplex Algorithm can start immediately. Therefore, it is advisable to use the Dual Simplex Algorithm for LPs of this form (all \(\geq\) constraints with a nonnegative cost vector).


Chapter 6

6.2

Consider the linear programming problem

(177)\[\begin{align} \text{minimize } &4x_1 + 3x_2 + 2x_3 + 6x_4 + x_5 \\ \text{subject to } &3x_1 +4x_2 − x_3 +2x_4 − x_5 =14 \\ &−x_1 + 4x_3 + x_4 −2x_5 = 15 \\ &−2x_1 + x_2 − 2x_3 +5x_5 = 10 \\ &x_j \geq 0, j=1,...,5. \end{align}\]
A = np.array([[3, 4, -1, 2, -1], [-1, 0, 4, 1, -2], [-2, 1, -2, 0, 5]])
b = np.array([14, 15, 10])
c = np.array([4, 3, 2, 6, 1])

(a) Verify that the matrix \(B = [A_{\bullet1} A_{\bullet3} A_{\bullet5}]\) is a feasible but not optimal basis for the above LP.

B = A[:, [0, 2, 4]]
NBV = A[:, [1, 3]]
print(f"B has full rank." if np.linalg.det(B) != 0 else f"B does not have full rank.")
B has full rank.

Since the determinant of \(B\) is nonzero, by the Invertible Matrix Theorem its columns span \(\mathbb{R}^3\) and therefore form a basis for the LP with 3 constraints.

print(
    "B is feasible because B^-1b >= 0."
    if np.all(np.linalg.inv(B) @ b >= 0)
    else "B is not feasible because B^-1b < 0."
)
B is feasible because B^-1b >= 0.
print(
    "Basis is optimal as reduced costs for minimization problem are <= 0!"
    if np.all(c[[0, 2, 4]] @ (np.linalg.inv(B) @ NBV) - c[[1, 3]] <= 0)
    else "Basis is not optimal as reduced costs for minimization problem are > 0!"
)
Basis is not optimal as reduced costs for minimization problem are > 0!

(b) Compute the inverse of \(\tilde{B} = [A_{\bullet2} A_{\bullet3} A_{\bullet5}]\) and verify that \(\tilde{B}\) is an optimal basis for the given LP.

B_tilde = A[:, [1, 2, 4]]
NBV_tilde = A[:, [0, 3]]
print(
    "B_tilde has full rank."
    if np.linalg.det(B_tilde) != 0
    else f"B_tilde does not have full rank."
)
B_tilde has full rank.

Since the determinant of \(\tilde{B}\) is nonzero, by the Invertible Matrix Theorem its columns span \(\mathbb{R}^3\) and therefore form a basis for the LP with 3 constraints.

print(
    "B_tilde is feasible because B_tilde^-1b >= 0."
    if np.all(np.linalg.inv(B_tilde) @ b >= 0)
    else "B_tilde is not feasible because B_tilde^-1b < 0."
)
B_tilde is feasible because B_tilde^-1b >= 0.
print(
    "Basis is optimal as reduced costs for minimization problem are <= 0!"
    if np.all(c[[1, 2, 4]] @ (np.linalg.inv(B_tilde) @ NBV_tilde) - c[[0, 3]] <= 0)
    else "Basis is not optimal as reduced costs for minimization problem are > 0!"
)
Basis is optimal as reduced costs for minimization problem are <= 0!

(c) Perform ranging on the right-hand side of the given LP.

For the given basis \(\tilde{B}\) to remain optimal when the RHS of the LP changes, \(\tilde{B}^{-1}b\) needs to remain \(\geq 0\):

(178)\[\begin{align} \tilde{B}^{-1}b + \tilde{B}^{-1}\delta b &\geq 0 \\ \end{align}\]
def δ_b_range(B, b):
    # For each component b_i, find the range of perturbations δ_i such that
    # the basis stays feasible: B^{-1}(b + δ_i e_i) = b_bar + δ_i B^{-1} e_i >= 0.
    B_inv = np.linalg.inv(B)
    b_bar = B_inv @ b
    for row_idx in range(B_inv.shape[0]):
        δ_candidates = -b_bar / B_inv[:, row_idx]
        # Negative column entries bound δ from above, positive ones from below.
        δ_ub, δ_lb = (
            min(δ_candidates[np.sign(B_inv[:, row_idx]) < 0]) if len(δ_candidates[np.sign(B_inv[:, row_idx]) < 0]) > 0 else np.inf,
            max(δ_candidates[np.sign(B_inv[:, row_idx]) > 0]) if len(δ_candidates[np.sign(B_inv[:, row_idx]) > 0]) > 0 else -np.inf,
        )
        if np.allclose(δ_ub, 0):
            δ_ub = 0
        if np.allclose(δ_lb, 0):
            δ_lb = 0
        if δ_ub == 0 and δ_lb == 0:
            print(f"δ_{row_idx+1} cannot be perturbed.")
        else:
            print(f"{δ_lb} <= δ_{row_idx+1} <= {δ_ub}.")

        # Feasibility checks at both endpoints.
        assert np.all(δ_lb * B_inv[:, row_idx] >= -b_bar), print(
            "δ_lb", δ_lb * B_inv[:, row_idx], -b_bar
        )
        assert np.all(δ_ub * B_inv[:, row_idx] >= -b_bar), print(
            "δ_ub", δ_ub * B_inv[:, row_idx], -b_bar
        )
δ_b_range(B_tilde, b)
-24.3125 <= δ_1 <= 52.25.
-17.476190476190474 <= δ_2 <= inf.
-13.0625 <= δ_3 <= inf.

(d) Perform ranging on the cost coefficients of the given LP.

For the given basis \(\tilde{B}\) to remain optimal when the cost coefficients of the objective function change, \({c_B}^\top\tilde{B}^{-1}N - c_N\) needs to remain \(\leq 0\) componentwise for a minimization problem:

\(\tilde{B}^{-1}\tilde{NBV}\)

np.linalg.inv(B_tilde) @ NBV_tilde
array([[ 0.41428571,  0.55714286],
       [-0.61428571,  0.24285714],
       [-0.72857143, -0.01428571]])
def c_range(A, c, basic_var_idxs, problem_type="min"):
    # Cost-coefficient ranging: for each basic variable, compute the interval
    # over which its cost can vary with the basis staying optimal; for each
    # nonbasic variable, the one-sided bound implied by its reduced cost.
    assert problem_type == "min" or problem_type == "max", print(
        "problem_type must be either 'min' or 'max'."
    )
    nonbasic_var_idxs = [idx for idx in range(len(c)) if idx not in basic_var_idxs]
    B, N = A[:, basic_var_idxs], A[:, nonbasic_var_idxs]
    B_inv = np.linalg.inv(B)
    for idx in range(len(c)):
        if idx in basic_var_idxs:
            RHS_basic_var_cost_coeffs = -(B_inv @ N)[list(basic_var_idxs).index(idx), :]
            LHS = (
                c[
                    [
                        basic_var_idx
                        for basic_var_idx in basic_var_idxs
                        if basic_var_idx != idx
                    ]
                ].T
                @ (B_inv @ N)[
                    [
                        i
                        for i in range(len(basic_var_idxs))
                        if i != list(basic_var_idxs).index(idx)
                    ],
                    :,
                ]
                - c[nonbasic_var_idxs]
            )
            c_lb_RHS_basic_var_cost_coeffs_idxs = np.argwhere(
                np.array(RHS_basic_var_cost_coeffs) > 0
            ).flatten()
            c_ub_RHS_basic_var_cost_coeffs_idxs = np.argwhere(
                np.array(RHS_basic_var_cost_coeffs) < 0
            ).flatten()
            c_lb_candidates = (
                (
                    LHS[c_lb_RHS_basic_var_cost_coeffs_idxs]
                    / RHS_basic_var_cost_coeffs[c_lb_RHS_basic_var_cost_coeffs_idxs]
                )
                if len(c_lb_RHS_basic_var_cost_coeffs_idxs) > 0
                else []
            )
            c_ub_candidates = (
                (
                    LHS[c_ub_RHS_basic_var_cost_coeffs_idxs]
                    / RHS_basic_var_cost_coeffs[c_ub_RHS_basic_var_cost_coeffs_idxs]
                )
                if len(c_ub_RHS_basic_var_cost_coeffs_idxs) > 0
                else []
            )
            c_lb = max(c_lb_candidates) if len(c_lb_candidates) > 0 else -np.inf
            c_ub = min(c_ub_candidates) if len(c_ub_candidates) > 0 else np.inf
            print(f"{c_lb} <= c_{idx+1} <= {c_ub}")
        else:
            print(
                f"c_{idx+1} >= {c[basic_var_idxs].T @ (B_inv @ N)[:, list(nonbasic_var_idxs).index(idx)]}"
            )
c_range(A=A, c=c, basic_var_idxs=[1, 2, 4])
c_1 >= -0.7142857142857142
-inf <= c_2 <= 9.923076923076923
-5.674418604651164 <= c_3 <= 17.88235294117647
c_4 >= 2.142857142857143
-5.470588235294117 <= c_5 <= inf

Optimality Conditions

(179)\[\begin{align} 0.41428571c_2 -0.61428571c_3 -0.72857143c_5 - c_1 \leq 0 \\ 0.55714286c_2 +0.24285714c_3 -0.01428571c_5 - c_4 \leq 0 \\ \end{align}\]
print(f"c_1 >= {c[[1, 2, 4]].T @ (np.linalg.inv(B_tilde) @ NBV_tilde)[:, 0]}")
c_1 >= -0.7142857142857142
print(
    f"c_2 <= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[1, 2], 0] @ c[[2, 4]] - c[0]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[0, 0]}"
)
c_2 <= 14.379310344827585
print(
    f"c_2 <= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[1, 2], 1] @ c[[2, 4]] - c[3]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[0, 1]}"
)
c_2 <= 9.923076923076923
print(
    f"c_3 >= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[0, 2], 0] @ c[[1, 4]] - c[0]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[1, 0]}"
)
c_3 >= -5.674418604651164
print(
    f"c_3 <= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[0, 2], 1] @ c[[1, 4]] - c[3]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[1, 1]}"
)
c_3 <= 17.88235294117647
print(f"c_4 >= {c[[1, 2, 4]].T @ (np.linalg.inv(B_tilde) @ NBV_tilde)[:, 1]}")
c_4 >= 2.142857142857143
print(
    f"c_5 >= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[0, 1], 0] @ c[[1, 2]] - c[0]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[2, 0]}"
)
c_5 >= -5.470588235294117
print(
    f"c_5 >= {-((np.linalg.inv(B_tilde) @ NBV_tilde)[[0, 1], 1] @ c[[1, 2]] - c[3]) / (np.linalg.inv(B_tilde) @ NBV_tilde)[2, 1]}"
)
c_5 >= -269.00000000000017

Range for Cost Coefficients:

(180)\[\begin{align} &c_1 \geq -0.7142857142857142 \\ &c_2 \leq 9.923076923076923 \\ &-5.674418604651164 \leq c_3 \leq 17.88235294117647 \\ &c_4 \geq 2.142857142857143 \\ &c_5 \geq -5.470588235294117 \\ \end{align}\]

6.5

Solve the parametric linear program \(P_\lambda\)

(181)\[\begin{align} \text{minimize } &8x_1 + 3x_2 + 5x_3 \\ \text{subject to } &2x_1 - x_2 + 5x_3 \leq 12 + \lambda \\ &-2x_1 + 4x_2 + 8x_3 \geq 6 + \lambda \\ &x_1, x_2, x_3 \geq 0. \end{align}\]

where \(\lambda\) is a parameter belonging to the interval \([−15, 15]\). Organize your answer in a table which gives the optimal solution (including the objective function value) as a function of the parameter \(\lambda\).

We first split the range into 1. \(-12 \leq \lambda \leq -6\), 2. \(-15 \leq \lambda < -12\), 3. \(-6 < \lambda \leq 15\)

1st Range for \(\lambda: -12 \leq \lambda \leq -6\)

Simplex Tableau:

(182)\[\begin{align} &\begin{array}{c} \\ z \\ s_1 \\ s_2 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c|c} z & x_1 & x_2 & x_3 & s_1 & s_2 & b & \lambda \\ \hline 1 & -8 & -3 & -5 & 0 & 0 & 0 & 0 \\ \hline 0 & 2 & -1 & 5 & 1 & 0 & 12 & 1 \\ 0 & 2 & -4 & -8 & 0 & 1 & -6 & -1 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Optimal because reduced cost coefficients < 0 for min problem,

(183)\[\begin{align} z &= 0 \\ s_1 &= 12 + \lambda \geq 0 \rightarrow \lambda \geq -12 \\ s_2 &= -6 - \lambda \geq 0 \rightarrow \lambda \leq -6 \\ \end{align}\]

Hence, current basis is optimal for \(-12 \leq \lambda \leq -6\) and the optimal solution is \((x_1=0,x_2=0,x_3=0,s_1=12+\lambda, s_2=-6-\lambda)\), objective value is 0.

2nd Range for \(\lambda: -15 \leq \lambda < -12\)

Since \(12 + \lambda\) \(< 0\) in this range, we need a new basis, leaving variable \(s_1\), and entering \(x_2\) by max ratio test.

Simplex Tableau:

(184)\[\begin{align} &\begin{array}{c} \\ z \\ x_2 \\ s_2 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c|c} z & x_1 & x_2 & x_3 & s_1 & s_2 & b & \lambda \\ \hline 1 & -14 & 0 & -20 & -3 & 0 & -36 & -3 \\ \hline 0 & -2 & 1 & -5 & -1 & 0 & -12 & -1 \\ 0 & -6 & 0 & -28 & -4 & 1 & -54 & -5 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Optimal because reduced cost coefficients < 0 for min problem,

(185)\[\begin{align} z &= -36 - 3\lambda \\ x_2 &= -12 - \lambda \geq 0 \rightarrow \lambda \leq -12 \\ s_2 &= -54 - 5\lambda \geq 0 \rightarrow \lambda \leq -54/5 \\ \end{align}\]

Hence, the current basis is optimal for \(-15 \leq \lambda < -12\) and the optimal solution is \((x_1=0,x_2=-12-\lambda,x_3=0,s_1=0, s_2=-54-5\lambda)\), with objective value \(3x_2 = -36-3\lambda\).

3rd Range for \(\lambda: -6 < \lambda \leq 15\)

Since \(-6 - \lambda\) \(< 0\) in this range, we need a new basis, leaving variable \(s_2\), and entering \(x_3\) by max ratio test.

Simplex Tableau:

(186)\[\begin{align} &\begin{array}{c} \\ z \\ s_1 \\ x_3 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c|c} z & x_1 & x_2 & x_3 & s_1 & s_2 & b & \lambda \\ \hline 1 & -37/4 & -1/2 & 0 & 0 & -5/8 & 15/4 & 5/8 \\ \hline 0 & 13/4 & -7/2 & 0 & 1 & 5/8 & 33/4 & 3/8 \\ 0 & -1/4 & 1/2 & 1 & 0 & -1/8 & 3/4 & 1/8 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Optimal because reduced cost coefficients < 0 for min problem,

(187)\[\begin{align} z &= 15/4 + (5/8)\lambda \\ s_1 &= 33/4 + (3/8)\lambda \geq 0 \rightarrow \lambda \geq -22 \\ x_3 &= 3/4 + (1/8)\lambda \geq 0 \rightarrow \lambda \geq -6 \\ \end{align}\]

Hence, the current basis is optimal and the optimal solution is \((x_1=0,x_2=0,x_3=3/4+(1/8)\lambda,s_1=33/4 + (3/8)\lambda, s_2=0)\), with objective value \(5x_3 = 15/4+(5/8)\lambda\).

Summary over \(\lambda \in [-15, 15]\):

\[\begin{array}{c|c|c} \lambda & (x_1, x_2, x_3) & \text{objective value} \\ \hline -15 \leq \lambda < -12 & (0,\, -12-\lambda,\, 0) & -36-3\lambda \\ -12 \leq \lambda \leq -6 & (0,\, 0,\, 0) & 0 \\ -6 < \lambda \leq 15 & (0,\, 0,\, 3/4+(1/8)\lambda) & 15/4+(5/8)\lambda \\ \end{array}\]
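A numerical sweep over sample \(\lambda\) values (our own choice of grid) confirms the three regimes:

for lam in [-15, -12, -9, -6, -3, 0, 6, 15]:
    x = cp.Variable(3, nonneg=True)
    constraints = [
        2 * x[0] - x[1] + 5 * x[2] <= 12 + lam,
        -2 * x[0] + 4 * x[1] + 8 * x[2] >= 6 + lam,
    ]
    prob = cp.Problem(cp.Minimize(8 * x[0] + 3 * x[1] + 5 * x[2]), constraints)
    prob.solve()
    print(f"lambda={lam:4}: z={prob.value:.4f}, x={np.round(x.value, 4)}")
# expect z = -36-3*lam for lam < -12, z = 0 on [-12, -6], z = 15/4+(5/8)*lam for lam > -6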

6.8

The LP of Example 3.8 is of the form commonly encountered in optimal resource allocation problems where the right-hand side represents amounts of resources and the coefficient matrix represents how much the different activities consume of the different resources when each activity is operated at unit level. The objective function coefficients are then the contributions of the activities when operated at unit level. With this as an interpretation of the model, how much would an additional unit of each resource add to the optimal value of the objective?

Example 3.8

(188)\[\begin{align} \text{minimize } &-4x_1 - 6x_2 - 3x_3 -x_4 \\ \text{subject to } &x_1 + 4x_2 +8x_3 + 6x_4 \leq 11 \\ &4x_1 + x_2 + 2x_3 +x_4 \leq 7 \\ &2x_1 + 3x_2 + x_3 + 2x_4 \leq 2 \\ &x_1, x_2, x_3, x_4 \geq 0 \\ \end{align}\]
c = np.array([-4, -6, -3, -1])
A = np.array([[1,4,8,6], [4,1,2,1], [2,3,1,2]])
b = np.array([11, 7, 2])
# Create one vector optimization variable.
x = cp.Variable((4,), integer=False)

# Create constraints.
constraints = [
    A @ x <= b,
    x >= 0,
]

# Form objective.
obj = cp.Minimize(c.T @ x)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()

print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: -5.33
The optimal solution is: x = [0.33, 0.0, 1.33, 0.0]

We can determine how much each additional unit of resource adds to the objective by examining the dual variables.

constraints[0].dual_value
array([0.13333333, 0.        , 1.93333333])

Hence, an additional unit of the 1st resource improves the optimal objective value by \(2/15\) (the minimized objective decreases by \(2/15\)), an additional unit of the 2nd resource adds nothing (its constraint is not binding), and an additional unit of the 3rd resource improves it by \(29/15\).
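Re-solving with one extra unit of the 3rd resource confirms the shadow price numerically (a sanity check, not part of the exercise):

x_pert = cp.Variable(4, nonneg=True)
prob_pert = cp.Problem(cp.Minimize(c @ x_pert), [A @ x_pert <= b + np.array([0, 0, 1])])
prob_pert.solve()
print(np.round(prob.value - prob_pert.value, 4))  # expect 1.9333 = 29/15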

6.13

Consider the linear program \(P(\lambda)\)

(189)\[\begin{align} \text{minimize } &3x_1 + (4+2\lambda)x_2 + 2x_3 − x_4 + (1−3\lambda)x_5 \\ \text{subject to } &5x_1 + 3x_2 +2x_3 −2x_4 + x_5 =20 \\ &x_1+ x_2+ x_3− x_4+ x_5= 8 \\ &x_j \geq 0, j=1,...,5. \\ \text{where } &\lambda ≤ −1/4. \\ \end{align}\]
(190)\[\begin{align} A &= \begin{bmatrix} 5 & 3 & 2 & -2 & 1 \\ 1 & 1 & 1 & -1 & 1 \\ \end{bmatrix} \\ b &= \begin{bmatrix} 20 \\ 8 \\ \end{bmatrix} \\ c &= \begin{bmatrix} 3 & 4 & 2 & -1 & 1 \\ \end{bmatrix} \\ d &= \begin{bmatrix} 0 & 2 & 0 & 0 & -3 \\ \end{bmatrix} \\ \end{align}\]
(191)\[\begin{align} \text{minimize } &(c + \lambda d)^\top x \\ \text{subject to } &Ax = b \\ &x \geq 0 \\ \end{align}\]
A = np.array([[5, 3, 2, -2, 1], [1, 1, 1, -1, 1]])
c = np.array([3, 4, 2, -1, 1])
d = np.array([0, 2, 0, 0, -3])
b = np.array([20, 8])

(a) Show that \(B = [A_{\bullet2} A_{\bullet3}]\) is a feasible basis for this problem.

B = A[:, [1, 2]]
NBV = A[:, [0, 3, 4]]
print(f"B has full rank." if np.linalg.det(B) != 0 else f"B does not have full rank.")
B has full rank.

Since the determinant of \(B\) is nonzero, by the Invertible Matrix Theorem its columns span \(\mathbb{R}^2\) and therefore form a basis for the LP with 2 constraints.

print(
    f"B is feasible because B^-1b >= 0."
    if np.alltrue(np.linalg.inv(B) @ b >= 0)
    else f"B is not feasible because B^-1b < 0."
)
B is feasible because B^-1b >= 0.

(b) For what range of \(\lambda\) values is \(B\) an optimal basis for \(P(\lambda)\)?

For \(B\) to be an optimal basis for \(P(\lambda)\), we need \({c_B}^\top B^{-1} N - c_N \leq 0\) for minimization problem:

(192)\[\begin{align} \begin{bmatrix} (4 + 2\lambda) & 2 \\ \end{bmatrix} \begin{bmatrix} 1 & -2 \\ -1 & 3 \\ \end{bmatrix} \begin{bmatrix} 5 & -2 & 1 \\ 1 & -1 & 1 \\ \end{bmatrix} - \begin{bmatrix} 3 & -1 & (1 - 3\lambda) \\ \end{bmatrix} &\leq 0 \\ \begin{bmatrix} (8 + 6\lambda) & -2 & -2\lambda \\ \end{bmatrix} - \begin{bmatrix} 3 & -1 & (1 - 3\lambda) \\ \end{bmatrix} &\leq 0 \\ \begin{bmatrix} (5+6\lambda) & -1 & (-1 + \lambda) \\ \end{bmatrix} &\leq 0 \\ \lambda &\leq -5/6 \\ -1 &\leq 0 \\ \lambda &\leq 1 \\ \end{align}\]

Hence, \(\lambda \leq -5/6\).
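A quick numeric check of this reduced-cost row at a sample value, say \(\lambda = -1\) (our own choice):

lam = -1.0
c_lam = c + lam * d
print(c_lam[[1, 2]] @ np.linalg.inv(B) @ NBV - c_lam[[0, 3, 4]])  # expect [-1. -1. -2.]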

(c) Solve this parametric linear programming problem for all \(\lambda \leq −1/4\). That is, find the set of values of the parameter \(\lambda\) for which \(P(\lambda)\) has an optimal solution, and for all such values, find an optimal point \(\bar{x}(\lambda)\) and the corresponding optimal objective value \(\bar{z}(\lambda)\).

We first split the range for \(\lambda\) into 1. \(\lambda \leq -5/6\) and 2. \(-5/6 < \lambda \leq -1/4\).

Simplex Tableau:

(193)\[\begin{align} &\begin{array}{c} \\ z \\ ? \\ ? \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline 1 & -3 & -(4+2\lambda) & -2 & 1 & -(1−3\lambda) & 0 \\ \hline &5 & 3 & 2 & −2 & 1 & 20 \\ &1 & 1 & 1 & -1 & 1 & 8 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Simplex Tableau with feasible basis \(B = [A_{\bullet2} A_{\bullet3}]\):

(194)\[\begin{align} &\begin{array}{c} \\ z \\ x_2 \\ x_3 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline 1 & 5+6\lambda & 0 & 0 & -1 & -1+\lambda & 0 \\ \hline 0 & 3 & 1 & 0 & 0 & -1 & 4 \\ 0 & -2 & 0 & 1 & -1 & 2 & 4 \\ \end{array} \end{bmatrix} \\ \end{align}\]

For the first range of \(\lambda\), \(\lambda \leq -5/6\), the reduced cost coefficients satisfy \(5+6\lambda \leq 0\) and \(-1+\lambda < 0\), so the solution \((x_1=0, x_2=4, x_3=4, x_4=0, x_5=0)\) is optimal for this minimization problem, with objective value \(24+8\lambda\).

However, for the 2nd range of \(\lambda\), \(-5/6 < \lambda \leq -1/4\), the reduced cost coefficient \(5+6\lambda > 0\), so the feasible basis is no longer optimal. We change basis to \(B = [A_{\bullet1} A_{\bullet3}]\): \(x_1\) enters, and the min ratio test chooses \(x_2\) as the leaving variable:

Simplex Tableau with feasible basis \(B = [A_{\bullet1} A_{\bullet3}]\):

(195)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_3 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline 1 & 0 & -(5+6\lambda)/3 & 0 & -1 & (2+9\lambda)/3 & 0 \\ \hline 0 & 1 & 1/3 & 0 & 0 & -1/3 & 4/3 \\ 0 & 0 & 2/3 & 1 & -1 & 4/3 & 20/3 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Hence, for \(-5/6 < \lambda \leq -1/4\), our reduced cost coefficients are \(\leq 0\) for a minimization problem and hence optimal. The optimal solution then becomes \((x_1=4/3,x_2=0,x_3=20/3,x_4=0, x_5=0)\), with objective value \(52/3\).
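Spot-checking both regimes with cvxpy at sample \(\lambda\) values (our own choice):

for lam in [-2.0, -5 / 6, -0.5, -0.25]:
    x = cp.Variable(5, nonneg=True)
    prob = cp.Problem(cp.Minimize((c + lam * d) @ x), [A @ x == b])
    prob.solve()
    print(f"lambda={lam:7.4f}: z={prob.value:.4f}, x={np.round(x.value, 4)}")
# expect z = 24+8*lam for lam <= -5/6 and z = 52/3 on (-5/6, -1/4]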

6.16

Let

\[\begin{equation*} A = \begin{bmatrix} 3 & 2 & 1 & 0 & 4 & -1 \\ 2 & 4 & 2 & -4 & 3 & 2 \\ 1 & 2 & 3 & 4 & 5 & 6 \\ 0 & -4 & 4 & 5 & -2 & 1 \\ \end{bmatrix} \,\,,\,\, b = \begin{pmatrix} 11 & 22 & 21 & 8 \\ \end{pmatrix} \,\,,\,\, c = \begin{pmatrix} 11 & 14 & 17 & 1 & 26 & 20 \\ \end{pmatrix} \end{equation*}\]

The first four columns of the matrix \(A\) constitute an optimal basis \(B\) in the linear program

(196)\[\begin{align} \text{minimize } &c^\top x \\ \text{subject to } &Ax = b \\ &x \geq 0. \\ \end{align}\]

Dual

(197)\[\begin{align} \text{maximize } &b^\top y \\ \text{subject to } &A^\top y \leq c \\ &y \text{ free}. \\ \end{align}\]
A = np.array(
    [[3, 2, 1, 0, 4, -1], [2, 4, 2, -4, 3, 2], [1, 2, 3, 4, 5, 6], [0, -4, 4, 5, -2, 1]]
)
B = A[:, :4]
b = np.array([11, 22, 21, 8])
c = np.array([11, 14, 17, 1, 26, 20])
# Create one vector optimization variable.
x = cp.Variable((6,), integer=False)

# Create constraints.
constraints = [
    A @ x == b,
    x >= 0,
]

# Form objective.
obj = cp.Minimize(c.T @ x)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()

print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 127.0
The optimal solution is: x = [0.0, 3.0, 5.0, -0.0, 0.0, 0.0]

(a) Find the inverse of \(B\).

np.linalg.inv(B)
array([[ 0.44736842, -0.09210526, -0.15789474,  0.05263158],
       [-0.09210526,  0.02631579,  0.22368421, -0.15789474],
       [-0.15789474,  0.22368421,  0.02631579,  0.15789474],
       [ 0.05263158, -0.15789474,  0.15789474, -0.05263158]])

(b) Find the basic solution corresponding to B and verify that it is a degenerate optimal solution of the LP.

x = np.zeros(c.shape)
x[:4] = np.linalg.inv(B) @ b
print(
    f"Basic Solution: {['x_' + str(i+1) + '=' + str(np.round(x_i, 2)) for i, x_i in enumerate(x)]}"
)
Basic Solution: ['x_1=0.0', 'x_2=3.0', 'x_3=5.0', 'x_4=0.0', 'x_5=0.0', 'x_6=0.0']

Since the basic variables \(x_1\) and \(x_4\) take the value 0, the basic solution is degenerate (there are redundant constraints at this vertex). It is nevertheless optimal, as the reduced-cost check below confirms.
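With \(y^\top = {c_B}^\top B^{-1}\) (computed explicitly in part (c)), the reduced costs of the nonbasic columns are nonpositive, confirming optimality:

y = c[:4] @ np.linalg.inv(B)
print(y @ A[:, 4:] - c[4:])  # expect [-5. -2.] <= 0, so the basis is optimal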

(c) Find the optimal solution of the dual problem.

By Complementary Slackness, the dual slack variables \(t_k, k \in [1, 2, 3, 4, 5, 6]\) satisfy \(t_k x_k = 0\):

(198)\[\begin{align} t_2 &= 0 \because x_2 \not= 0 \\ t_3 &= 0 \because x_3 \not= 0 \\ t_1, t_4, t_5, t_6 &\text{ may be nonzero} \because x_1 = x_4 = x_5 = x_6 = 0 \\ \end{align}\]

Since \(x_1\) and \(x_4\) are basic (the reduced costs of basic columns vanish), in fact \(t_1 = t_4 = 0\) as well.

Plugging values into the dual constraints:

(199)\[\begin{align} 3y_1 + 2y_2 + y_3 + 0y_4 + t_1 &= 11 \\ 2y_1 + 4y_2 + 2y_3 - 4y_4 &= 14 \\ y_1 + 2y_2 + 3y_3 + 4y_4 &= 17 \\ 0y_1 - 4y_2 + 4y_3 + 5y_4 + t_4 &= 1 \\ 4y_1 + 3y_2 + 5y_3 - 2y_4 + t_5 &= 26 \\ -y_1 + 2y_2 + 6y_3 + y_4 + t_6 &= 20 \\ \end{align}\]

Dual Optimal Solution:

(200)\[\begin{align} y^\top &= {c_B}^\top B^{-1} \end{align}\]
c[0:4]@np.linalg.inv(B)
array([1., 3., 2., 1.])

\(\therefore\) Dual Optimal Solution \(\bar{y}\):

(201)\[\begin{align} y_1 &= 1 \\ y_2 &= 3 \\ y_3 &= 2 \\ y_4 &= 1 \\ \end{align}\]

(d) For each of the components of the right-hand side vector b determine whether (positive or negative) perturbations of that component yield a feasible solution of the LP. (Hint: See Section 6.2.)

For the basis to remain feasible under a right-hand-side perturbation \(\delta b\), we need \(\bar{b} + B^{-1}\delta b \geq 0\):

def δ_b_range(B, b):
    # For each component b_i, find the perturbation range δ_i keeping
    # B^{-1}(b + δ_i e_i) = b_bar + δ_i B^{-1} e_i >= 0, i.e. the basis feasible.
    B_inv = np.linalg.inv(B)
    b_bar = B_inv @ b
    for row_idx in range(B_inv.shape[0]):
        δ_candidates = -b_bar / B_inv[:, row_idx]
        # Negative column entries bound δ from above, positive ones from below.
        δ_ub, δ_lb = (
            min(δ_candidates[np.sign(B_inv[:, row_idx]) < 0]),
            max(δ_candidates[np.sign(B_inv[:, row_idx]) > 0]),
        )
        if np.allclose(δ_ub, 0):
            δ_ub = 0
        if np.allclose(δ_lb, 0):
            δ_lb = 0
        if δ_ub == 0 and δ_lb == 0:
            print(f"δ_{row_idx+1} cannot be perturbed.")
        else:
            print(f"{δ_lb} <= δ_{row_idx+1} <= {δ_ub}.")

        # Feasibility checks at both endpoints.
        assert np.all(δ_lb * B_inv[:, row_idx] >= -b_bar), print(
            "δ_lb", δ_lb * B_inv[:, row_idx], -b_bar
        )
        assert np.all(δ_ub * B_inv[:, row_idx] >= -b_bar), print(
            "δ_ub", δ_ub * B_inv[:, row_idx], -b_bar
        )
δ_b_range(B, b)
0 <= δ_1 <= 31.666666666666668.
-22.352941176470587 <= δ_2 <= 0.
δ_3 cannot be perturbed.
δ_4 cannot be perturbed.

(e) What does all this say about giving an economic interpretation to the shadow prices in this LP?

c[0:4] @ np.linalg.inv(B)
array([1., 3., 2., 1.])

The dual variables (shadow prices) can be interpreted as how much the objective value will change if the RHS of the corresponding primal constraint is increased. E.g. if \(b_1 = 11 + (\delta_1=1)\) instead of \(11\) (possible because \(\delta_1=1\) is within the range of perturbation found in the previous part), the objective will increase by \(y_1 = 1\). By complementary slackness, in an inequality-constrained LP a zero dual variable indicates that the corresponding primal constraint is not active (binding), while a positive dual variable indicates an active (binding) constraint. Here, however, part (d) shows that because the optimum is degenerate, some components of \(b\) can be perturbed in only one direction (\(\delta_1 \geq 0\), \(\delta_2 \leq 0\)) or not at all (\(b_3, b_4\)), so each shadow price is valid only for perturbations within those one-sided ranges.

(f) Investigate what can be said about shadow prices in other LPs having degenerate optimal basic solutions such that only one basic variable has value zero?

When exactly one basic variable has value zero at the optimum, perturbations of \(b\) in directions that would push that variable negative force a basis change, so the optimal value function is kinked there: the shadow prices computed from the current basis are valid only as one-sided rates of change, and the two one-sided shadow prices of a constraint may differ.


Extra Question

Consider the following linear program:

(202)\[\begin{align} \text{maximize } &2x_1 + x_2 − x_3 \\ \text{subject to } &x_1 + 2x_2 + x_3 \leq 8 \\ &−x_1 + x_2 − 2x_3 \leq 4 \\ &\text{and } x_1, x_2, x_3 \geq 0 \\ \end{align}\]

Parts (b) through (e) below are independent of each other.

Standard Form

(203)\[\begin{align} \text{maximize } &2x_1 + x_2 − x_3 \\ \text{subject to } &x_1 + 2x_2 + x_3 + x_4 = 8 \\ &−x_1 + x_2 − 2x_3 + x_5 = 4 \\ &\text{and } x_1, x_2, x_3, x_4, x_5 \geq 0 \\ \end{align}\]
A = np.array([[1, 2, 1, 1, 0], [-1, 1, -2, 0, 1]])
b = np.array([8, 4])
c = np.array([2, 1, -1, 0, 0])

(a) You are given that an optimal solution of the problem has \(x_1\) and \(x_5\) as the basic variables. Compute this solution.

Final Tableau for Optimal Solution:

(204)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_5 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline 16 & 0 & 3 & 3 & 2 & 0 & - \\ \hline - & 1 & 2 & 1 & 1 & 0 & 8 \\ - & 0 & 3 & -1 & 1 & 1 & 12 \\ \end{array} \end{bmatrix} \\ \end{align}\]
np.linalg.inv(A[:, [0, 4]]) @ b
array([ 8., 12.])
(c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3]]) - c[[1, 2, 3]]
array([3., 3., 2.])

Since reduced cost coefficients are >= 0 for maximization problem, solution is optimal.

Optimal Solution: \((x_1=8, x_2=0, x_3=0, x_4=0, x_5=12)\)

# Create one vector optimization variable.
x = cp.Variable((5,), integer=False)

# Create constraints.
constraints = [
    A @ x == b,
    x >= 0,
]

# Form objective.
obj = cp.Maximize(c.T @ x)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()

print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 16.0
The optimal solution is: x = [8.0, 0.0, -0.0, 0.0, 12.0]

(b) Using sensitivity analysis, find a new optimal solution if the coefficient of \(x_2\) in the objective function is changed from 1 to 5.

A = np.array([[1, 2, 1, 1, 0], [-1, 1, -2, 0, 1]])
b = np.array([8, 4])
c = np.array([2, 5, -1, 0, 0])

\({c_B}^\top B^{-1} N - c_N\)

(c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3]]) - c[[1, 2, 3]]
array([-1.,  3.,  2.])

Since reduced cost coefficients are not >= 0 for maximization problem, solution is not optimal anymore.

\(B^{-1}N\)

np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3]]
array([[ 2.,  1.,  1.],
       [ 3., -1.,  1.]])

\(B^{-1}b\)

np.linalg.inv(A[:, [0, 4]]) @ b
array([ 8., 12.])

\({c_B}^\top B^{-1}b\)

c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ b
16.0
(205)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_5 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline 16 & 0 & -1 & 3 & 2 & 0 & - \\ \hline - & 1 & 2 & 1 & 1 & 0 & 8 \\ - & 0 & 3 & -1 & 1 & 1 & 12 \\ \end{array} \end{bmatrix} \\ \end{align}\]

\(x_2\) is entering variable, \(x_5\) is chosen as a leaving variable by min ratio test.

(206)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_2 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline 20 & 0 & 0 & 8/3 & 7/3 & 1/3 & - \\ \hline - & 1 & 0 & 5/3 & 1/3 & -2/3 & 0 \\ - & 0 & 1 & -1/3 & 1/3 & 1/3 & 4 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Since reduced cost coefficients are >= 0 for max problem, and RHS > 0, new optimal solution: \((x_1=0, x_2=4, x_3=0, x_4=0, x_5=0)\)

# Create one vector optimization variable.
x = cp.Variable((5,), integer=False)

# Create constraints.
constraints = [
    A @ x == b,
    x >= 0,
]

# Form objective.
obj = cp.Maximize(c.T @ x)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()

print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 20.0
The optimal solution is: x = [0.0, 4.0, -0.0, 0.0, 0.0]

(c) Suppose that the coefficient of \(x_2\) in the first constraint is changed from 2 to 1/6. Using sensitivity analysis, find a new optimal solution.

A = np.array([[1, 1 / 6, 1, 1, 0], [-1, 1, -2, 0, 1]])
b = np.array([8, 4])
c = np.array([2, 1, -1, 0, 0])

\({c_B}^\top B^{-1} N - c_N\)

(c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3]]) - c[[1, 2, 3]]
array([-0.66666667,  3.        ,  2.        ])

Since reduced cost coefficients are not >= 0 for maximization problem, solution is not optimal anymore.

\(B^{-1}N\)

np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3]]
array([[ 0.16666667,  1.        ,  1.        ],
       [ 1.16666667, -1.        ,  1.        ]])

\(B^{-1}b\)

np.linalg.inv(A[:, [0, 4]]) @ b
array([ 8., 12.])

\({c_B}^\top B^{-1}b\)

c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ b
16.0
(207)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_5 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline 16 & 0 & -2/3 & 3 & 2 & 0 & - \\ \hline - & 1 & 1/6 & 1 & 1 & 0 & 8 \\ - & 0 & 7/6 & -1 & 1 & 1 & 12 \\ \end{array} \end{bmatrix} \\ \end{align}\]

\(x_2\) is entering variable, \(x_5\) is chosen as a leaving variable by min ratio test.

(208)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_2 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline 22.86 & 0 & 0 & 2.43 & 2.57 & 0.57 & - \\ \hline - & 1 & 0 & 1.14 & 0.86 & -0.14 & 6.29 \\ - & 0 & 1 & -0.86 & 0.86 & 0.86 & 10.29 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Since reduced cost coefficients are >= 0 for max problem, and RHS > 0, new optimal solution: \((x_1=6.29, x_2=10.29, x_3=0, x_4=0, x_5=0)\)

Double Check:

# Create one vector optimization variable.
x = cp.Variable((5,), integer=False)

# Create constraints.
constraints = [
    A @ x == b,
    x >= 0,
]

# Form objective.
obj = cp.Maximize(c.T @ x)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()

print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 22.86
The optimal solution is: x = [6.29, 10.29, 0.0, 0.0, 0.0]

\({c_B}^\top B^{-1} N - c_N\)

(c[[0, 1]].T @ np.linalg.inv(A[:, [0, 1]]) @ A[:, [2, 3, 4]]) - c[[2, 3, 4]]
array([2.42857143, 2.57142857, 0.57142857])

Since reduced cost coefficients are >= 0 for maximization problem, solution is optimal.

\(B^{-1}N\)

np.linalg.inv(A[:, [0, 1]]) @ A[:, [2, 3, 4]]
array([[ 1.14285714,  0.85714286, -0.14285714],
       [-0.85714286,  0.85714286,  0.85714286]])

\(B^{-1}b\)

np.linalg.inv(A[:, [0, 1]]) @ b
array([ 6.28571429, 10.28571429])

\({c_B}^\top B^{-1}b\)

c[[0, 1]].T @ np.linalg.inv(A[:, [0, 1]]) @ b
22.857142857142858

(d) Suppose that the following constraint is added to the problem: \(x_2 + 2x_3 = 3\). Using sensitivity analysis, find the new optimal solution.

A = np.array([[1, 2, 1, 1, 0], [-1, 1, -2, 0, 1], [0, 1, 2, 0, 0]])
b = np.array([8, 4, 3])
c = np.array([2, 1, -1, 0, 0])

Optimal Solution: \((x_1=8, x_2=0, x_3=0, x_4=0, x_5=12)\)

\(0 + 2(0) \not= 3, \therefore \) Current optimal solution is infeasible. Hence, we have to add the new constraint and resolve the system using the Dual Simplex.

(211)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_5 \\ x_2 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline - & 0 & 3 & 3 & 2 & 0 & - \\ \hline - & 1 & 2 & 1 & 1 & 0 & 8 \\ - & 0 & 3 & -1 & 1 & 1 & 12 \\ - & 0 & -1 & -2 & 0 & 0 & -3 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Make \(x_2\) the pivot by min ratio test on 3rd row:

(212)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_5 \\ x_2 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline - & 0 & 0 & -3 & 2 & 0 & - \\ \hline - & 1 & 0 & -3 & 1 & 0 & 2 \\ - & 0 & 0 & -7 & 1 & 1 & 3 \\ - & 0 & 1 & 2 & 0 & 0 & 3 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Make \(x_3\) entering variable, \(x_2\) leaving:

(213)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_5 \\ x_3 \\ \end{array} \begin{bmatrix} \begin{array}{c|ccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & b \\ \hline - & 0 & 3/2 & 0 & 2 & 0 & - \\ \hline - & 1 & 3/2 & 0 & 1 & 0 & 13/2 \\ - & 0 & 7/2 & 0 & 1 & 1 & 27/2 \\ - & 0 & 1/2 & 1 & 0 & 0 & 3/2 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Since reduced cost coefficients >= 0 for max problem [OPTIMAL] and RHS Basic Variables >= 0 [FEASIBLE], dual simplex terminates.

Optimal solution: \((x_1=6.5, x_2=0, x_3=1.5, x_4=0, x_5=13.5)\)

Double Check:

# Create one vector optimization variable.
x = cp.Variable((5,), integer=False)

# Create constraints.
constraints = [
    A @ x == b,
    x >= 0,
]

# Form objective.
obj = cp.Maximize(c.T @ x)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()

print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 11.5
The optimal solution is: x = [6.5, 0.0, 1.5, 0.0, 13.5]

(e) Suppose that a new activity \(x_6\) is proposed with unit return 6 and consumption vector (2, 1). Using sensitivity analysis, find a new optimal solution.

A = np.array([[1, 2, 1, 1, 0, 2], [-1, 1, -2, 0, 1, 1]])
b = np.array([8, 4])
c = np.array([2, 1, -1, 0, 0, 6])

\({c_B}^\top B^{-1} N - c_N\)

(c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3, 5]]) - c[[1, 2, 3, 5]]
array([ 3.,  3.,  2., -2.])

Since reduced cost coefficients are not all >= 0 for maximization problem, solution is not optimal anymore, we might want to produce some of activity 6.

\(B^{-1}N\)

np.linalg.inv(A[:, [0, 4]]) @ A[:, [1, 2, 3, 5]]
array([[ 2.,  1.,  1.,  2.],
       [ 3., -1.,  1.,  3.]])

\(B^{-1}b\)

np.linalg.inv(A[:, [0, 4]]) @ b
array([ 8., 12.])

\({c_B}^\top B^{-1}b\)

c[[0, 4]].T @ np.linalg.inv(A[:, [0, 4]]) @ b
16.0
(214)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_5 \\ \end{array} \begin{bmatrix} \begin{array}{c|cccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & b \\ \hline 16 & 0 & 3 & 3 & 2 & 0 & -2 & - \\ \hline - & 1 & 2 & 1 & 1 & 0 & 2 & 8 \\ - & 0 & 3 & -1 & 1 & 1 & 3 & 12 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Choose \(x_6\) as entering variable, and \(x_5\) as leaving variable by min ratio test in normal simplex method and break tie by choosing slack var,

(215)\[\begin{align} &\begin{array}{c} \\ z \\ x_1 \\ x_6 \\ \end{array} \begin{bmatrix} \begin{array}{c|cccccc|c} z & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & b \\ \hline 24 & 0 & 5 & 7/3 & 8/3 & 2/3 & 0 & - \\ \hline - & 1 & 0 & 5/3 & 1/3 & -2/3 & 0 & 0 \\ - & 0 & 1 & -1/3 & 1/3 & 1/3 & 1 & 4 \\ \end{array} \end{bmatrix} \\ \end{align}\]

Since reduced cost coefficients >= 0 for max problem [OPTIMAL] and RHS Basic Variables >= 0 [FEASIBLE], the simplex method terminates.

Optimal solution: \((x_1=0, x_2=0, x_3=0, x_4=0, x_5=0, x_6=4)\), with objective value \(24\)

Double Check:

# Create one vector optimization variable.
x = cp.Variable((6,), integer=False)

# Create constraints.
constraints = [
    A @ x == b,
    x >= 0,
]

# Form objective.
obj = cp.Maximize(c.T @ x)

# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve()

print("Linear Programming Solution")
print("=" * 30)
print(f"Status: {prob.status}")
print(f"The optimal value is: {np.round(prob.value, 2)}")
print(f"The optimal solution is: x = {[np.round(x_i, 2) for x_i in x.value]}")
Linear Programming Solution
==============================
Status: optimal
The optimal value is: 24.0
The optimal solution is: x = [0.0, 0.0, -0.0, 0.0, 0.0, 4.0]