Templates for solving linear systems
See the second graph above. The Substitution Method. First, solve one linear equation for y in terms of x. Then substitute that expression for y in the other linear equation. This gives an equation in x alone; solve it, and you have the x-coordinate of the intersection. Then plug that x into either equation to find the corresponding y-coordinate. If it's easier, you can instead start by solving an equation for x in terms of y; same difference! Example: solve the second equation for y.
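To make the steps concrete, here is a worked example with a hypothetical system (chosen so the answer matches the "(-1, -3)" in the review questions below). Take 2x - y = 1 and 3x - y = 0. Solving the second equation for y gives y = 3x. Substituting into the first equation: 2x - 3x = 1, so -x = 1 and x = -1. Plugging back in, y = 3(-1) = -3, so the lines intersect at (-1, -3). Check: 2(-1) - (-3) = -2 + 3 = 1. ✓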
The Elimination Method. Add or subtract a multiple of one equation to or from the other equation, in such a way that either the x-terms or the y-terms cancel out. (A short worked example follows the review questions below.)

Review questions:
- What does a system of linear equations with no solution look like?
- To use the substitution method, what must the format of at least one equation be?
- How would you check your solution after solving using the substitution method? When checking your work, what method would you use to make sure your solution is correct?
- What is the solution of the example system? Answer: (-1, -3). Show your work to your teacher.
- What is the goal when using the Elimination Method? Answer: to eliminate one variable in each linear equation.
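As a minimal illustration of elimination, using the same hypothetical system as above: with 2x - y = 1 and 3x - y = 0, subtract the first equation from the second. The y-terms cancel, leaving x = -1. Substituting back into 3x - y = 0 gives y = -3, so elimination finds the same intersection point (-1, -3) as substitution.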
minres is typically the best solver to try for symmetric indefinite systems when memory is limited. You can generally use gmres for almost all square, nonsymmetric problems. There are some cases where the biconjugate gradients algorithms (bicg, bicgstab, cgs, and so on) are more efficient than gmres, but their unpredictable convergence behavior often makes gmres a better initial choice. The convergence rate of iterative methods depends on the spectrum (eigenvalues) of the coefficient matrix.
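A minimal sketch of that first try (the matrix below is a synthetic assumption, built to be well conditioned so that unpreconditioned gmres converges):

```matlab
% Try gmres first on a square, nonsymmetric sparse system.
rng(0)                                  % reproducible random matrix
n = 1000;
A = speye(n) + 0.1*sprandn(n,n,5/n);    % sparse, nonsymmetric, eigenvalues near 1
b = rand(n,1);
[x,flag,relres,iter] = gmres(A,b,[],1e-10,50);   % flag = 0 means it converged
```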
Therefore, you can improve the convergence and stability of most iterative methods by transforming the linear system to have a more favorable spectrum (clustered eigenvalues, or a condition number near 1). This transformation is performed by applying a second matrix, called a preconditioner, to the system.
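For instance, this small sketch (the test matrix and the choice of incomplete factorization are assumptions) estimates the condition number before and after preconditioning:

```matlab
% Condition estimates with and without an incomplete Cholesky
% preconditioner M = L*L'.
A = gallery('poisson',50);   % SPD five-point Laplacian test matrix
L = ichol(A);                % incomplete Cholesky factor
condest(A)                   % condition estimate of A
condest(L\A/L')              % preconditioned matrix has a smaller estimate
```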
This process transforms the linear system Ax = b. The ideal preconditioner transforms the coefficient matrix A into an identity matrix, since any iterative method will converge in one iteration with such a preconditioner. In practice, finding a good preconditioner requires trade-offs. The transformation is performed in one of three ways: left preconditioning, right preconditioning, or split preconditioning. The first case is called left preconditioning since the preconditioner matrix M appears on the left of A:

M⁻¹Ax = M⁻¹b

In right preconditioning, M appears on the right of A:

AM⁻¹(Mx) = b

Finally, for symmetric coefficient matrices A, split preconditioning with M = HHᵀ ensures that the transformed system is still symmetric:

H⁻¹AH⁻ᵀ(Hᵀx) = H⁻¹b

The solver algorithm for split preconditioned systems is based on the above equation, but in practice there is no need to compute H. The solver algorithm multiplies and solves with M directly.
In all cases, the preconditioner M is chosen to accelerate convergence of the iterative method. When the residual error of an iterative solution stagnates or makes little progress between iterations, it often means you need to generate a preconditioner matrix to incorporate into the problem.
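One way to spot this is to inspect the solver outputs and plot the residual history; a sketch, with the test matrix again an assumption:

```matlab
% Monitor convergence to decide whether a preconditioner is needed.
A = gallery('poisson',50);
b = ones(size(A,1),1);
[x,flag,relres,iter,resvec] = gmres(A,b,[],1e-10,100);
% flag = 0: converged; flag = 1: maxit reached; flag = 3: gmres stagnated.
semilogy(0:length(resvec)-1, resvec/norm(b))   % a flat curve means little progress
xlabel('Iteration'), ylabel('Relative residual')
```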
In some cases, preconditioners occur naturally in the mathematical model of a given problem. In the absence of a natural preconditioner, you can use an incomplete factorization, ilu (incomplete LU) or ichol (incomplete Cholesky), to generate a preconditioner matrix. Incomplete factorizations are essentially incomplete direct solves that are quick to calculate. See Incomplete Factorizations for more information about ilu and ichol. As an example, consider the five-point finite difference approximation to Laplace's equation on a square, two-dimensional domain.
For this system, pcg is unable to find a solution without a preconditioner matrix. The choice of preconditioner matters, too: using ichol to construct a modified incomplete Cholesky factorization as the preconditioner allows pcg to meet the specified tolerance after only 39 iterations.
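A sketch of this experiment, modeled on the example in the MATLAB documentation (the grid size, tolerance, and iteration limit are assumptions):

```matlab
% Five-point Laplacian on a square domain (100-by-100 interior grid).
A = delsq(numgrid('S',102));
b = ones(size(A,1),1);
tol = 1e-12;  maxit = 100;

% Without a preconditioner, pcg cannot reach tol within maxit iterations.
[x0,fl0,rr0,it0] = pcg(A,b,tol,maxit);

% Modified incomplete Cholesky preconditioner M = L*L'.
L = ichol(A,struct('type','nofill','michol','on'));
[x1,fl1,rr1,it1] = pcg(A,b,tol,maxit,L,L');
```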
For computationally tough problems, the preconditioner that ilu or ichol generates directly might not be good enough. For example, you might want a higher quality preconditioner, or you might need to minimize the amount of computation being done.
In these cases, you can use equilibration to make the coefficient matrix more diagonally dominant (which can lead to a better quality preconditioner) and reordering to minimize the number of nonzeros in the matrix factors (which can reduce memory requirements and may improve the efficiency of subsequent calculations). The overall approach is:

1. Use equilibrate on the coefficient matrix.
2. Reorder the equilibrated matrix using a sparse matrix reordering function, such as dissect or symrcm.
3. Generate the final preconditioner using ilu or ichol.
Generate the final preconditioner using ilu or ichol. Here is an example that uses equilibration and reordering to generate a preconditioner for a sparse coefficient matrix.
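A minimal sketch of the three steps (the test matrix, drop tolerance, and solver settings are assumptions chosen for illustration):

```matlab
% 1) Equilibrate: B = R*P*A*C has diagonal entries of magnitude 1.
load west0479                 % sparse nonsymmetric test matrix shipped with MATLAB
A = west0479;
b = sum(A,2);                 % right-hand side with known solution x = ones
[P,R,C] = equilibrate(A);
B = R*P*A*C;
d = R*P*b;

% 2) Reorder the equilibrated matrix to reduce fill-in in the factors.
q = dissect(B);
Bq = B(q,q);

% 3) Generate the final preconditioner with an incomplete LU factorization.
[L,U] = ilu(Bq, struct('type','ilutp','droptol',1e-4));

% Solve the permuted, equilibrated system, then undo the permutation and
% the column scaling to recover the solution of the original system.
[yq,flag] = gmres(Bq, d(q), [], 1e-10, 50, L, U);
y = zeros(size(yq));
y(q) = yq;
x = C*y;
```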