In linear programming, reduced cost, or opportunity cost, is the amount by which an objective function coefficient would have to improve (increase for a maximization problem, decrease for a minimization problem) before it would be possible for the corresponding variable to assume a positive value in the optimal solution. It is the cost of increasing a variable by a small amount, i.e., the first derivative of the objective from a given point on the polyhedron that constrains the problem. When the point is a vertex of the polyhedron, the variable with the most extreme reduced cost, negative for minimization and positive for maximization, is sometimes referred to as the steepest edge.
Given a system minimize c^T x subject to Ax ≥ b, x ≥ 0, the reduced cost vector can be computed as d = c − A^T y, where y is the dual cost vector.
It follows directly that for a minimization problem, any non-basic variables at their lower bounds with strictly negative reduced costs are eligible to enter the basis, while all basic variables must have a reduced cost of exactly 0. For a maximization problem, the non-basic variables at their lower bounds that are eligible to enter the basis have a strictly positive reduced cost.
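As a concrete illustration, the following sketch computes the reduced cost vector with NumPy and flags the variables that would be eligible to enter the basis in a minimization. The problem data c, A and y are hypothetical values chosen for this example, not taken from any particular model.

```python
import numpy as np

def reduced_costs(c, A, y):
    """Reduced cost vector d = c - A^T y for: minimize c^T x s.t. A x >= b, x >= 0."""
    return c - A.T @ y

# Hypothetical data: minimize x1 + 2*x2  subject to  x1 + x2 >= 1,  x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
y = np.array([1.0])            # optimal dual value for the single constraint

d = reduced_costs(c, A, y)
print(d)                        # [0. 1.] -- x1 is basic (d = 0), x2 is non-basic

# For a minimization, a non-basic variable at its lower bound is eligible to
# enter the basis only if its reduced cost is strictly negative; here none is,
# which is consistent with y being the optimal dual vector.
print(np.where(d < -1e-9)[0])   # empty array
```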
Interpretation
For the case where x and y are optimal, the reduced costs can help explain why the variables attain the values they do. For each variable, the reduced cost is the objective coefficient minus the dual-weighted sum of that variable's constraint coefficients, which shows which constraints force the variable up and which force it down. For non-basic variables, the distance to zero gives the minimal change in the objective coefficient needed to change the solution vector x.
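Continuing the small hypothetical example above, the sketch below decomposes the reduced cost of the non-basic variable into its objective coefficient and the dual-weighted constraint contributions, and checks that improving the coefficient by more than the reduced cost would make the variable attractive.

```python
import numpy as np

c = np.array([1.0, 2.0])       # objective coefficients (hypothetical)
A = np.array([[1.0, 1.0]])     # single constraint: x1 + x2 >= 1
y = np.array([1.0])            # optimal dual price of that constraint

j = 1                          # index of the non-basic variable x2
# Decomposition: d_j = c_j - sum_i A[i, j] * y[i].
# A positive term A[i, j] * y[i] pulls the reduced cost down, i.e. that
# constraint favours (forces up) the variable; a negative term forces it down.
contributions = A[:, j] * y
d_j = c[j] - contributions.sum()
print(d_j)                     # 1.0

# Distance to zero = minimal improvement of c_j that can change the solution:
# lowering c[1] by more than d_j makes the reduced cost negative, so x2 would
# then be eligible to enter the basis and the optimal x would change.
c_improved = c[j] - (d_j + 0.1)
print(c_improved - A[:, j] @ y)  # -0.1 < 0: x2 now prices out favourably
```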
Reduced costs in pivot strategy
In principle, a good pivot strategy would be to select whichever variable has the greatest reduced cost. However, the steepest edge might ultimately not be the most attractive, as the edge might be very short, affording only a small improvement in the objective function value. From a computational point of view, another problem is that computing the steepest edge requires an inner product for every variable in the system, making the computational cost too high in many cases. The Devex algorithm attempts to overcome the latter problem by estimating the reduced costs rather than calculating them at every pivot step, exploiting the fact that a pivot step does not necessarily alter the reduced costs of all variables dramatically.
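The sketch below shows plain full pricing for a minimization in standard equality form (a simplified, hypothetical setup; it is the classic Dantzig rule, not the Devex rule itself): it computes every reduced cost with one inner product per column and then picks the most negative one, which is exactly the per-pivot cost that estimation schemes such as Devex try to avoid.

```python
import numpy as np

def dantzig_entering(A, c, basis):
    """Pick the entering variable with the most negative reduced cost.

    A and c describe  min c^T x  s.t.  A x = b, x >= 0;  `basis` lists the
    indices of the current basic columns.  (Hypothetical helper, for
    illustration only.)
    """
    B = A[:, basis]
    y = np.linalg.solve(B.T, c[basis])   # simplex multipliers (dual estimate)
    d = c - A.T @ y                      # reduced costs: one inner product per column
    d[basis] = 0.0                       # basic variables price out to zero
    j = int(np.argmin(d))
    return (j, d[j]) if d[j] < -1e-9 else (None, 0.0)  # None -> basis is optimal

# Tiny example: min -x1 - x2  s.t.  x1 + s1 = 1, x2 + s2 = 1, starting from the slack basis.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
c = np.array([-1.0, -1.0, 0.0, 0.0])
print(dantzig_entering(A, c, basis=[2, 3]))  # -> (0, -1.0): x1 enters
```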
Reduced Cost in Linear Programming
Associated with each variable is a reduced cost value. However, the reduced cost value is only non-zero when the optimal value of a variable is zero. A somewhat intuitive way to think about the reduced cost value is to think of it as indicating how much the cost of the activity represented by the variable must be reduced before any of that activity will be done. More precisely,
... the reduced cost value indicates how much the objective function coefficient on the corresponding variable must be improved before the value of the variable will be positive in the optimal solution.
In the case of a minimization problem, "improved" means "reduced". So, in the case of a cost-minimization problem, where the objective function coefficients represent the per-unit cost of the activities represented by the variables, the "reduced cost" coefficients indicate how much each cost coefficient would have to be reduced before the activity represented by the corresponding variable would be cost-effective. In the case of a maximization problem, "improved" means "increased". In this case, where the objective function coefficient might represent, for example, the net profit per unit of the activity, the reduced cost value indicates how much the profitability of the activity would have to be increased in order for the activity to occur in the optimal solution. The units of the reduced-cost values are the same as the units of the corresponding objective function coefficients.
If the optimal value of a variable is positive (not zero), then the reduced cost is always zero. If the optimal value of a variable is zero and the reduced cost corresponding to the variable is also zero, then there is at least one other corner that is also optimal; the value of this variable will be positive at one of the other optimal corners.
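As a rough check of these statements, the sketch below solves a tiny made-up problem whose optimum is a whole edge, using SciPy's linprog with a HiGHS simplex method (the ineqlin marginals used here are available in recent SciPy releases, roughly 1.7 and later). The returned vertex has a zero-valued variable whose reduced cost, recovered from the constraint shadow prices, is also zero, signalling another optimal corner.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical problem with alternative optima:
#   minimize x1 + x2   subject to   x1 + x2 >= 1,  x >= 0
# Every point on the segment between (1, 0) and (0, 1) is optimal.
c = np.array([1.0, 1.0])
A_ub = np.array([[-1.0, -1.0]])   # -x1 - x2 <= -1, i.e. x1 + x2 >= 1
b_ub = np.array([-1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")  # dual simplex -> vertex solution

# Reduced costs recovered from the shadow prices of the inequality constraints:
# d = c - A_ub^T * (d fun / d b_ub)
d = c - A_ub.T @ res.ineqlin.marginals
print(res.x, d)   # one coordinate of x is 0, yet its reduced cost is also 0

# A zero-valued variable with zero reduced cost means another optimal corner
# exists where that variable is positive (here, the other end of the edge).
```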
See also
- Linear programming
- Shadow price