Linear Programming Computation, pp 101–121

## Duality Principle and Dual Simplex Method

- Ping-Qi Pan
- First Online: 10 October 2013


Duality concerns a special relationship between an LP problem and another LP problem, both of which involve the same original data (A, b, c), arranged differently (except for self-dual problems, see below). The former is referred to as the primal problem and the latter as the dual problem. Importantly, there is a close relationship between their feasible regions, optimal solutions, and optimal values. Duality, together with the optimality conditions derived from it, constitutes a basis for LP theory. On the other hand, the economic interpretation of duality underlies its practical applications. This chapter is devoted to these topics.
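The primal-dual relationship described above can be made concrete on a toy instance (the data below are illustrative, not taken from the chapter): for a standard-form primal min cᵀx subject to Ax = b, x ≥ 0, the dual is max bᵀy subject to Aᵀy ≤ c, and at optimality the two objective values coincide (strong duality). A minimal sketch with one equality constraint and two variables:

```python
# Toy LP duality check (hypothetical instance):
#   primal:  min  c^T x   s.t.  a1*x1 + a2*x2 = b,  x >= 0
#   dual:    max  b*y     s.t.  a_j*y <= c_j  for j = 1, 2   (y free)
c = (1.0, 2.0)
a = (1.0, 1.0)
b = 1.0

# Primal: with a single equality constraint, each basic feasible
# solution puts all weight on one variable: x_j = b / a_j.
primal_opt = min(c[j] * (b / a[j]) for j in range(2) if b / a[j] >= 0)

# Dual: maximize b*y subject to y <= c_j / a_j (here a_j > 0, b > 0),
# so the optimal y is the tightest of those upper bounds.
dual_opt = b * min(c[j] / a[j] for j in range(2))

print(primal_opt, dual_opt)  # equal by strong duality: 1.0 1.0
```

The dual optimal value never exceeds the primal optimal value (weak duality); equality of the two, as seen here, is the content of the strong duality theorem developed in this chapter.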

- Dual Problem
- Duality Features
- Simplex Tableau
- Dual Basic Feasible Solution
- Tableau Version




Paul Samuelson (1915–2009), American economist, winner of the Nobel Memorial Prize in Economic Sciences (1970), the first American to win this prize.


Terlaky T (1985) A covergent criss-cross method. Math Oper Stat Ser Optim 16:683–690

Terlaky T (1993) Pivot rules for linear programming: a survey on recent theoretical developments. Ann Oper Res 46:203–233

Terlaky T (ed) (1996) Interior point methods of mathematical programming. Kluwer, Dordrecht

Todd MJ (1982) An implementation of the simplex method for linear programming problems with variable upper bounds. Math Program 23:23–49

Todd MJ (1983) Large scale linear programming: geometry, working bases and factorizations. Math Program 26:1–23

Tomlin JA (1972) Modifying trangular factors of the basis in the simplex method. In: Rose DJ, Willoughby RA (eds) Sparse matrices and applications. Plenum, New York

Tomlin JA (1974) On pricing and backward transformation in linear programming. Math Program 6:42–47

Tomlin JA (1975) On scaling linear programming problems. Math Program Study 4:146–166

Tomlin JA (1987) An experimental approach to Karmarkar’s projective method, for linear programming. Math Program 31:175–191

Tsuchiya T (1992) Global convergence property of the affine scaling method for primal degenerate linear programming problems. Math Oper Res 17:527–557

Tsuchiya T, Muramatsu M (1995) Global convergence of a long-step affine-scaling algorithm for degenrate linear programming problems. SIAM J Optim 5:525–551

Tucker AW (1956) Dual systems of homegeneous linear relations. In: Kuhn HW, Tucker AW, Dantzig GB (eds) Linear inequalities and related systems. Princeton University Press, Princeton, pp 3–18

Turner K (1991) Computing projections for the Karmarkar algorithtm. Linear Algebra Appl 152:141–154

Vanderbei RJ, Lagarias JC (1990) I.I. Dikin’s convergence result for the affine-scaling algorithm. Contemp Math 114:109–119

Vanderbei RJ, Meketon M, Freedman B (1986) A modification of Karmarkars linear programming algorithm. Algorithmica 1:395–407

Vemuganti RR (2004) On gradient simplex methods for linear programs. J Appl Math Decis Sci 8:107–129

Wang Z (1987) A conformal elimination-free algorithm for oriented matroid programming. Chin Ann Math 8(B1):16–25

Wilkinson JH (1971) Moden error analysis. SIAM Rev 13:548–568

Wolfe P (1963) A technique for resolving degeneracy in linear programming. J Oper Res Soc 11:205–211

Wolfe P (1965) The composite simplex algorithm. SIAM Rev 7:42–54

Wolsey L (1998) Integer programming. Wiley, New York

Wright SJ (1997) Primal-dual interior-point methods. SIAM, Philadelphia

Xu X, Ye Y (1995) A generalized homogeneous and self-dual algorithm for linear prgramming. Oper Res Lett 17:181–190

Xu X, Hung P-F, Ye Y (1996) A simplified homogeneous and self-dual linear programming algorithm and its implementation. Ann Oper Res 62:151–171

Yan W-L, Pan P-Q (2001) Improvement of the subproblem in the bisection simplex algorithm (in Chinese). J Southeast Univ 31:324–241

Yan A, Pan P-Q (2005) Variation of the conventional pivot rule and the application in the deficient basis algorithm (in Chinese). Oper Res Manag Sci 14:28–33

Yan H-Y, Pan P-Q (2009) Most-obtuse-angle criss-cross algorithm for linear programming (in Chinese). Numer Math J Chin Univ 31:209–215

Yang X-Y, Pan P-Q (2006) Most-obtuse-angle dual relaxation algorithm (in Chinese). In: Preceedings of the national conference of operatios research society of China, Shenzhen. Globa-Link Informatics, Hong Kong, pp 150–155

Ye Y (1987) Eliminating columns in the simplex method for linear programming. Technical report SOL 87–14, Department of Operations Research, Stanford Univesity, Stanford

Ye Y (1990) A ‘build-down’ scheme for linear probleming. Math Program 46:61–72

Ye Y (1997) Interior point algorithms: theory and analysis. Wiley, New York

Ye Y, Todd MJ, Mizuno S (1994) An \(o(\sqrt{n}l)\) -iteration homogeneous and selfdual linear programming algorithm. Math Oper Res 19:53–67

Zhang H, Pan P-Q (2008) An interior-point algorithm for linear programming (in Chinese). In: Preceedings of the national conference of operatios research society of China, Nanjing. Globa-Link Informatics, Hong Kong, pp 183–187

Zhang J-Z, Xu S-J (1997) Linear programming (in Chinese). Science Press, Bejing

Zhang L-H, Yang W-H, Liao L-Z (2013) On an efficient implementation of the face algorithm for linear programming. J Comp Math 31:335–354

Zhou Z-J, Pan P-Q, Chen S-F (2009) Most-obtuse-angle relaxation algorithm (in Chinese). Oper Res Manag Sci 18:7–10

Zionts S (1969) The criss-cross method for solving linear programming problems. Manag Sci 15:420–445

Zlatev Z (1980) On some pivotal strategies in Gaussian elimination by sparse technique. SIAM J Numer Anal 17:12–30

Zörnig P (2006) Systematic construction of examples for cycling in the simplex method. Comput Oper Res 33:2247–2262

Zoutendijk G (1960) Methods of feasible directions. Elsevier, Amsterdam

Download references

## Author information

Authors and Affiliations

Department of Mathematics, Southeast University, Nanjing, China

Ping-Qi PAN



## Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

## About this chapter

Cite this chapter.

PAN, PQ. (2014). Duality Principle and Dual Simplex Method. In: Linear Programming Computation. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40754-3_4


Published: 10 October 2013

Publisher Name: Springer, Berlin, Heidelberg

Print ISBN: 978-3-642-40753-6

Online ISBN: 978-3-642-40754-3

eBook Packages: Mathematics and Statistics (R0)


## 4.3: Minimization By The Simplex Method


- Rupinder Sekhon and Roberta Bloom
- De Anza College

## Learning Objectives

In this section, you will learn to solve linear programming minimization problems using the simplex method.

- Identify and set up a linear program in standard minimization form
- Formulate a dual problem in standard maximization form
- Use the simplex method to solve the dual maximization problem
- Identify the optimal solution to the original minimization problem from the optimal simplex tableau.

In this section, we will solve standard linear programming minimization problems using the simplex method. Once again, we remind the reader that in a standard minimization problem all constraints are of the form \(ax + by ≥ c\).

The procedure for solving these problems was developed by John von Neumann. It involves solving an associated problem called the dual problem . To every minimization problem there corresponds a dual problem, and the solution of the dual problem is used to find the solution of the original problem. The dual problem is a maximization problem, which we learned to solve in the last section. We first solve the dual problem by the simplex method.

From the final simplex tableau, we then extract the solution to the original minimization problem.

Before we go any further, however, we first learn to convert a minimization problem into its corresponding maximization problem called its dual .

## Example \(\PageIndex{1}\)

Convert the following minimization problem into its dual.

\[\begin{array}{ll} \textbf { Minimize } & \mathrm{Z}=12 \mathrm{x}_{1}+16 \mathrm{x}_{2} \\ \textbf { Subject to: } & \mathrm{x}_{1}+2 \mathrm{x}_{2} \geq 40 \\ & \mathrm{x}_{1}+\mathrm{x}_2 \geq 30 \\ & \mathrm{x}_{1} \geq 0 ; \mathrm{x}_{2} \geq 0 \end{array} \nonumber \]

To achieve our goal, we first express our problem as the following matrix.

\[\begin{array}{cc|c} 1 & 2 & 40 \\ 1 & 1 & 30 \\ \hline 12 & 16 & 0 \end{array} \nonumber \]

Observe that this table looks like an initial simplex tableau without the slack variables. Next, we write the matrix whose rows are the columns of this matrix and whose columns are its rows. Such a matrix is called the transpose of the original matrix. We get:

\[\begin{array}{cc|c} 1 & 1 & 12 \\ 2 & 1 & 16 \\ \hline 40 & 30 & 0 \end{array} \nonumber \]

The following maximization problem associated with the above matrix is called its dual.

\[\begin{array}{ll} \textbf { Maximize } & \mathrm{Z}=40 \mathrm{y}_{1}+30 \mathrm{y}_{2} \\ \textbf { Subject to: } & \mathrm{y}_{1}+\mathrm{y}_{2} \leq 12 \\ & 2 \mathrm{y}_1+\mathrm{y}_2 \leq 16 \\ & \mathrm{y}_{1} \geq 0 ; \mathrm{y}_{2} \geq 0 \end{array} \nonumber \]

Note that we have chosen the variables as y's, instead of x's, to distinguish the two problems.
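The transpose step is mechanical enough to check in a few lines of code. The sketch below (assuming NumPy is available; the variable names are illustrative only) builds the augmented matrix of Example 1 and transposes it:

```python
import numpy as np

# Augmented matrix of the minimization problem: one row per constraint,
# with the objective coefficients in the bottom row, as in the text.
#   x1 + 2*x2 >= 40
#   x1 +   x2 >= 30
#   minimize 12*x1 + 16*x2
primal = np.array([[1, 2, 40],
                   [1, 1, 30],
                   [12, 16, 0]])

# The dual's matrix is the transpose: rows and columns are interchanged.
dual = primal.T
print(dual)
```

Reading each row of `dual` as a \(\leq\) constraint in the y-variables, with the bottom row as the objective to maximize, recovers exactly the dual problem stated above.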

## Example \(\PageIndex{2}\)

Solve graphically both the minimization problem and its dual maximization problem.

Our minimization problem is as follows.

\[\begin{array}{ll} \textbf { Minimize } & \mathrm{Z}=12 \mathrm{x}_1+16 \mathrm{x}_2 \\ \textbf { Subject to: } & \mathrm{x}_{1}+2 \mathrm{x}_{2} \geq 40 \\ & \mathrm{x}_{1}+\mathrm{x}_{2} \geq 30 \\ & \mathrm{x}_{1} \geq 0 ; \mathrm{x}_{2} \geq 0 \end{array} \nonumber \]

We now graph the inequalities:

We have plotted the graph, shaded the feasibility region, and labeled the corner points. The corner point (20, 10) gives the lowest value for the objective function and that value is 400.

Now its dual is:

\[\begin{array}{ll} \textbf { Maximize } & \mathrm{Z}=40 \mathrm{y}_1+30 \mathrm{y}_{2} \\ \textbf { Subject to: } & \mathrm{y}_{1}+\mathrm{y}_{2} \leq 12 \\ & 2 \mathrm{y}_1+\mathrm{y}_2 \leq 16 \\ & \mathrm{y}_{1} \geq 0 ; \mathrm{y}_{2} \geq 0 \end{array}\nonumber \]

We graph the inequalities:

Again, we have plotted the graph, shaded the feasibility region, and labeled the corner points. The corner point (4, 8) gives the highest value for the objective function, with a value of 400.

The reader may recognize that Example \(\PageIndex{2}\) above is the same as Example 3.1.1, in section 3.1. It is also the same problem as Example 4.1.1 in section 4.1, where we solved it by the simplex method.

We observe that the minimum value of the minimization problem is the same as the maximum value of the maximization problem; in Example \(\PageIndex{2}\) the minimum and maximum are both 400. This is not a coincidence. We state the duality principle.

## The Duality Principle

The objective function of the minimization problem reaches its minimum if and only if the objective function of its dual reaches its maximum. And when they do, they are equal.
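The principle can be checked numerically on Example 2. The sketch below is an assumption-laden illustration: it uses SciPy's `linprog`, which minimizes and accepts only \(\leq\) constraints, so the \(\geq\) rows of the primal and the maximization objective of the dual are negated.

```python
from scipy.optimize import linprog

# Primal: minimize 12*x1 + 16*x2  s.t.  x1 + 2*x2 >= 40,  x1 + x2 >= 30.
# linprog uses <= constraints, so both >= rows are negated.
primal = linprog(c=[12, 16], A_ub=[[-1, -2], [-1, -1]], b_ub=[-40, -30])

# Dual: maximize 40*y1 + 30*y2  s.t.  y1 + y2 <= 12,  2*y1 + y2 <= 16.
# linprog minimizes, so the objective is negated and the sign restored after.
dual = linprog(c=[-40, -30], A_ub=[[1, 1], [2, 1]], b_ub=[12, 16])

print(primal.fun, -dual.fun)  # both equal 400
```

Both optimal values come out to 400, as the duality principle asserts.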

Our next goal is to extract the solution for our minimization problem from the corresponding dual. To do this, we solve the dual by the simplex method.

## Example \(\PageIndex{3}\)

Find the solution to the minimization problem in Example \(\PageIndex{1}\) by solving its dual using the simplex method. We rewrite our problem and its dual.

\[\begin{array}{ll} \textbf { Minimize } & \mathrm{Z}=12 \mathrm{x}_{1}+16 \mathrm{x}_{2} \\ \textbf { Subject to: } & \mathrm{x}_{1}+2 \mathrm{x}_{2} \geq 40 \\ & \mathrm{x}_{1}+\mathrm{x}_{2} \geq 30 \\ & \mathrm{x}_{1} \geq 0 ; \mathrm{x}_{2} \geq 0 \end{array} \nonumber \]

\[\begin{array}{ll} \textbf { Maximize } & \mathrm{Z}=40 \mathrm{y}_{1}+30 \mathrm{y}_{2} \\ \textbf { Subject to: } & \mathrm{y}_{1}+\mathrm{y}_{2} \leq 12 \\ & 2 \mathrm{y}_{1}+\mathrm{y}_{2} \leq 16 \\ & \mathrm{y}_{1} \geq 0 ; \mathrm{y}_{2} \geq 0 \end{array} \nonumber \]

Recall that we solved the above problem by the simplex method in Example 4.1.1, section 4.1. Therefore, we only show the initial and final simplex tableau.

The initial simplex tableau is

\[\begin{array}{ccccc|c} \mathrm{y}_1 & \mathrm{y}_2 & \mathrm{x}_{1} & \mathrm{x}_{2} & \mathrm{Z} & \mathrm{C} \\ 1 & 1 & 1 & 0 & 0 & 12 \\ 2 & 1 & 0 & 1 & 0 & 16 \\ \hline-40 & -30 & 0 & 0 & 1 & 0 \end{array}\nonumber \]

Observe an important change. Here our main variables are \(\mathrm{y}_1\) and \(\mathrm{y}_2\), and the slack variables are \(\mathrm{x}_1\) and \(\mathrm{x}_2\).

The final simplex tableau reads as follows:

\[\begin{array}{ccccc|c} \mathrm{y}_1 & \mathrm{y}_2 & \mathrm{x}_{1} & \mathrm{x}_{2} & \mathrm{Z} & \\ 0 & 1 & 2 & -1 & 0 & 8 \\ 1 & 0 & -1 & 1 & 0 & 4 \\ \hline 0 & 0 & 20 & 10 & 1 & 400 \end{array} \nonumber \]

A closer look at this tableau reveals that the \(\mathrm{x}_1\) and \(\mathrm{x}_2\) values, along with the minimum value of the minimization problem, can be read from the last row of the final tableau. We have highlighted these values with arrows:

\[\begin{array}{ccccc|c} \mathrm{y}_1 & \mathrm{y}_2 & \mathrm{x}_{1} & \mathrm{x}_{2} & \mathrm{Z} & \\ 0 & 1 & 2 & -1 & 0 & 8 \\ 1 & 0 & -1 & 1 & 0 & 4 \\ \hline 0 & 0 & 20 & 10 & 1 & 400 \\ & & \uparrow & \uparrow & & \uparrow \end{array} \nonumber \]

We restate the solution as follows:

The minimization problem has a minimum value of 400 at the corner point (20, 10).

We now summarize our discussion.

## Minimization by the Simplex Method

- Set up the problem.
- Write a matrix whose rows represent each constraint with the objective function as its bottom row.
- Write the transpose of this matrix by interchanging the rows and columns.
- Now write the dual problem associated with the transpose.
- Solve the dual problem by the simplex method learned in section 4.1.
- The optimal solution is found in the bottom row of the final tableau, in the columns corresponding to the slack variables; the minimum value of the objective function equals the maximum value of the dual.
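The steps above can be sketched as code. The function below is a minimal teaching implementation of the tableau simplex method for a standard maximization problem (an assumption-laden sketch: NumPy is available, and no degeneracy or unboundedness handling is included). It is applied to the dual of Example \(\PageIndex{1}\), and the minimizer is then read from the slack-variable columns of the bottom row, exactly as described above.

```python
import numpy as np

def simplex_max(A, b, c):
    """Maximize c@y subject to A@y <= b, y >= 0 (with b >= 0), by the
    tableau method. Teaching sketch: no degeneracy/unboundedness handling."""
    m, n = A.shape
    # Initial tableau [A | I | b] with objective row [-c | 0 | 0] at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    while (T[-1, :-1] < -1e-9).any():
        col = int(np.argmin(T[-1, :-1]))       # most negative indicator
        ratios = np.full(m, np.inf)            # ratio test on positive entries
        pos = T[:m, col] > 1e-9
        ratios[pos] = T[:m, -1][pos] / T[:m, col][pos]
        row = int(np.argmin(ratios))           # smallest nonnegative ratio
        T[row] /= T[row, col]                  # scale the pivot row
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]     # eliminate the pivot column
    return T

# Dual of Example 1: maximize 40*y1 + 30*y2, y1 + y2 <= 12, 2*y1 + y2 <= 16.
T = simplex_max(np.array([[1.0, 1.0], [2.0, 1.0]]),
                np.array([12.0, 16.0]), np.array([40.0, 30.0]))

# Slack columns of the bottom row give the minimizer; the corner entry the value.
x1, x2, z = T[-1, 2], T[-1, 3], T[-1, -1]
print(x1, x2, z)  # 20.0 10.0 400.0
```

Tracing the two pivots reproduces the initial and final tableaus shown in Example \(\PageIndex{3}\), with the minimizer (20, 10) and the value 400 appearing in the bottom row.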
