Sunday, April 7, 2024

Introduction to Optimization (Unit 3)


Syllabus (as per CU):





Introduction to Optimization: 

 

·  In calculus and mathematics, an optimization problem is also termed a mathematical programming problem. In simple words, it is the mechanism through which we find the element, variable, or quantity that best fits a given set of criteria or constraints.

·   Optimization: the act of obtaining the best result under the given circumstances. Design, construction, and maintenance of engineering systems involve decision making at both the managerial and the technological level. The goal of such decisions is to minimize the effort required or to maximize the desired benefit.

·       Optimization is thus defined as the process of finding the conditions that give the minimum or maximum value of a function, where the function represents the effort required or the desired benefit.

·       The existence of optimization methods can be traced back to the days of Newton, Lagrange, and Cauchy.

·        Key Components of Optimization:

      Objective Function:

·        This is the function that needs to be either maximized or minimized. It represents the quantity of interest in a given problem. For example, in manufacturing, the objective might be to minimize production costs, while in project scheduling, the objective could be to maximize resource utilization.

    Decision Variables:

·        These are the variables that can be adjusted or controlled to influence the outcome of the objective function. The goal is to find the values of these variables that optimize the objective function.

    Constraints:

Constraints are conditions or limitations that must be satisfied during the optimization process. They represent real-world restrictions on the values the decision variables can take. Constraints play a crucial role in defining the feasible solution space.
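To make these three components concrete, the following is a minimal sketch in Python using SciPy (the library adopted later in these notes); the particular function, starting point, and constraint are made-up illustrative choices.

# Objective function: f(x, y) = (x - 1)^2 + (y - 2)^2, to be minimized
# Decision variables: x and y
# Constraint: x + y <= 3
from scipy.optimize import minimize

def objective(v):
    x, y = v
    return (x - 1) ** 2 + (y - 2) ** 2

# SciPy's "ineq" convention requires fun(v) >= 0, so x + y <= 3 becomes 3 - (x + y) >= 0
constraints = [{"type": "ineq", "fun": lambda v: 3 - (v[0] + v[1])}]

result = minimize(objective, x0=[0.0, 0.0], constraints=constraints)
print(result.x)  # optimal decision variables, approximately [1, 2]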

 

  

Types of Optimization Problems & Techniques:

·       In general, optimization is the process of finding and selecting the optimal element from a set of feasible candidates.

·       In mathematical optimization, this problem is usually formulated as determining the extreme value of a function over a given domain.

·       An extreme value, or an optimal value, can refer to either the minimum or maximum of the function, depending on the application and the specific problem.

·       Optimization problems refer to finding the most appropriate solution out of all feasible solutions.

·       The optimization problem can be defined as a computational situation where the objective is to find the best of all possible solutions.

·       Using optimization to solve design problems provides unique insights into situations. The model can compare the current design to the best possible and includes information about limitations and implied costs of arbitrary rules and policy decisions.

·       A well-designed optimization model can also aid in what-if analysis, revealing where improvements can be made or where trade-offs may need to be made. The application of optimization to engineering problems spans multiple disciplines.

Optimization methods fall into different categories: some are deterministic, while others are probabilistic. In either case, a mathematical algorithm evaluates a set of candidate solutions and chooses the best one. The problem domain is specified by constraints, such as the range of possible values the variables of a function may take, and the objective function must be evaluated to find the optimum. The optimal solution is the one with minimal error, ideally an error of zero.

Optimization Problems

There are different types of optimization problems. A few simple ones do not require formal optimization, such as problems with obvious answers or with no decision variables. But in most cases a mathematical solution is necessary, and the goal is to achieve optimal results, for example reducing a problem's cost or minimizing its risk. A problem can also be multi-objective and involve several decisions.

There are three main elements to solve an optimization problem: an objective, variables, and constraints. Each variable can have different values, and the aim is to find the optimal value for each one. The purpose is the desired result or goal of the problem.

Let us walk through the various types of optimization problems, which differ in these elements.

1. Maximization Vs. Minimization Problems

The simplest cases of optimization problems are minimization or maximization of scalar functions. If we have a scalar function of one or more variables, f(x_1, x_2, …, x_n), then the following is an optimization problem:

Find x_1, x_2, …, x_n where f(x) is minimum

Or we can have an equivalent maximization problem.

When we define functions quantifying errors or penalties, we pose a minimization problem. On the other hand, if a learning algorithm constructs a function modeling the accuracy of a method, we maximize that function.

Many automated software tools for optimization implement either maximization or minimization, but not both. Hence, we can convert a maximization problem into a minimization problem (and vice versa) by negating f(x), i.e.,

Maximize f(x) w.r.t. x is equivalent to Minimize -f(x) w.r.t. x
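A minimal sketch of this conversion with SciPy, using a made-up concave function whose maximum is known:

from scipy.optimize import minimize_scalar

def f(x):
    return -(x - 3) ** 2 + 5   # concave, with maximum value 5 at x = 3

res = minimize_scalar(lambda x: -f(x))   # minimize -f to maximize f
print(res.x, -res.fun)                   # approximately 3 and 5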

2. Global Vs. Local Optimum Points

In machine learning, we often encounter functions that are highly non-linear with a complex landscape. It is possible that there is a point where the function has the lowest value within a small, local region around that point. Such a point is called a local minimum point.

This is opposed to the global minimum point, which is the point where the function has the least value over its entire domain. (Figure: local vs. global optimum points of a function.)
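A small illustrative sketch: the function below (chosen purely for illustration) has one local and one global minimum, and a gradient-based solver lands in one or the other depending on the starting point.

from scipy.optimize import minimize

def f(x):
    # two minima: a global one near x = -1 and a local one near x = +1
    return (x[0] ** 2 - 1) ** 2 + 0.3 * x[0]

for start in (-2.0, 2.0):
    res = minimize(f, x0=[start])
    print(f"start = {start:+.1f}  ->  x = {res.x[0]:+.3f},  f = {res.fun:+.3f}")
# Starting at -2 reaches the global minimum near x = -1;
# starting at +2 gets stuck in the local minimum near x = +1.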



3.     Unconstrained Vs. Constrained Optimization

There are many problems in machine learning, where we are interested in finding the global optimum point without any constraints or restrictions on the region in space. Such problems are called unconstrained optimization problems.

At times we have to solve an optimization problem subject to certain constraints. Such optimization problems are termed constrained optimization problems.

For example:

Minimize x^2 + y^2    subject to    x + y <= 1

Examples of constrained optimization are:

  1. Find the minimum of a function when the domain variables must sum to one.
  2. Find the minimum of a function such that certain vectors are normal to each other.
  3. Find the minimum of a function such that certain domain variables lie in a certain range.
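Returning to the example above (minimize x^2 + y^2 subject to x + y <= 1), a SciPy sketch follows. Note that for this particular problem the constraint turns out to be inactive, since the unconstrained minimum (0, 0) already satisfies x + y <= 1; reversing the constraint to x + y >= 1 makes it bind, which the sketch also demonstrates.

from scipy.optimize import minimize

objective = lambda v: v[0] ** 2 + v[1] ** 2

# SciPy's "ineq" convention is fun(v) >= 0
le_con = {"type": "ineq", "fun": lambda v: 1 - (v[0] + v[1])}   # x + y <= 1
ge_con = {"type": "ineq", "fun": lambda v: (v[0] + v[1]) - 1}   # x + y >= 1

print(minimize(objective, [2.0, 2.0], constraints=[le_con]).x)  # ~ [0, 0]
print(minimize(objective, [2.0, 2.0], constraints=[ge_con]).x)  # ~ [0.5, 0.5]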

 

Feasible Region

All the points in space where the constraints on the problem hold true comprise the feasible region. An optimization algorithm searches for optimal points in the feasible region.

For an unconstrained optimization problem, the entire domain of the function is the feasible region.

Equality Vs. Inequality Constraints

The constraints imposed in an optimization problem can be equality constraints, of the form g(x) = 0, or inequality constraints, of the form h(x) <= 0 (or h(x) >= 0).

4.     Linear Vs. Non-linear Programming

·       An optimization problem where the function is linear and all equality or inequality constraints are also linear constraints is called a linear programming problem.

·       If either the objective function is non-linear or one or more of the constraints are non-linear, then we have a non-linear programming problem.

·       To visualize the difference between linear and non-linear functions you can check out the figure below.

Linear vs. non-linear functions


5.     Continuous Optimization versus Discrete Optimization

Models with discrete variables are discrete optimization problems, while models with continuous variables are continuous optimization problems. Continuous optimization problems tend to be easier to solve than discrete ones. A discrete optimization problem looks for an object such as an integer, permutation, or graph from a countable set. With improvements in algorithms coupled with advancements in computing technology, there has been an increase in the size and complexity of discrete optimization problems that can be solved efficiently. Note that continuous optimization algorithms are important in discrete optimization because many discrete optimization algorithms generate a sequence of continuous sub-problems.
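A small sketch contrasting the two settings on the same made-up objective: the continuous version is handled by a standard solver, while the discrete version is solved by exhaustive search over a countable set.

from scipy.optimize import minimize_scalar

f = lambda x: (x - 2.6) ** 2

cont = minimize_scalar(f)           # continuous: x may be any real number
disc = min(range(6), key=f)         # discrete: x restricted to the integers 0..5
print(cont.x, disc)                 # approximately 2.6, and exactly 3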

6.     None, One, or Many Objectives

Although most optimization problems have a single objective function, there are cases in which optimization problems have no objective function (pure feasibility problems) or multiple objective functions. Multi-objective optimization problems arise in engineering, economics, and logistics. Often, problems with multiple objectives are reformulated as single-objective problems.

7.  Deterministic Optimization versus Stochastic Optimization

In deterministic optimization, the data for the given problem are known accurately. Sometimes, however, the data cannot be known precisely, for various reasons: a simple measurement error can be one; another is that some data describe information about the future and hence cannot be known with certainty. Optimization under uncertainty is called stochastic optimization when the uncertainty is incorporated into the model.
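One common approach is sample-average approximation, sketched below: the uncertain quantity is represented by random samples and the expected cost is replaced by a sample mean. The demand model used here is an assumption made purely for illustration.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
demand = rng.normal(loc=100, scale=15, size=10_000)   # assumed demand distribution

# choose the order quantity x minimizing E[(x - D)^2], approximated by the sample average
expected_cost = lambda x: np.mean((x - demand) ** 2)

res = minimize_scalar(expected_cost, bounds=(0, 200), method="bounded")
print(res.x)   # close to the sample mean (about 100), as theory predicts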

8. Linear Programming:

In linear programming (LP) problems, the objective and all of the constraints are linear functions of the decision variables.

As all linear functions are convex, linear programming problems are intrinsically easier to solve than general non-linear problems.
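A minimal LP sketch with SciPy's linprog, which always minimizes c @ x (so a maximization is negated first); the coefficients are illustrative only.

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0
from scipy.optimize import linprog

c = [-3, -2]                        # negated: maximization posed as minimization
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)              # optimal point [4, 0], maximum value 12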


9. Quadratic Programming:

In the quadratic programming (QP) problem, the objective is a quadratic function of the decision variables, and the constraints are all linear functions of the variables.

A widely used Quadratic Programming problem is the Markowitz mean-variance portfolio optimization problem. The objective is the portfolio variance, and the linear constraints dictate a lower bound for portfolio return.
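A tiny Markowitz-style sketch using cvxopt (the QP solver imported later in these notes). cvxopt.solvers.qp minimizes (1/2) x'Px + q'x subject to Gx <= h and Ax = b; the covariance matrix and expected returns below are made-up illustrative data.

import numpy as np
from cvxopt import matrix, solvers

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])   # assumed asset covariance
mu = np.array([0.08, 0.12])                      # assumed expected returns
r_min = 0.10                                     # required portfolio return

P = matrix(2 * Sigma)                            # objective: x' Sigma x
q = matrix(np.zeros((2, 1)))
G = matrix(np.vstack([-mu.reshape(1, 2), -np.eye(2)]))  # -mu'x <= -r_min, -x <= 0
h = matrix(np.array([[-r_min], [0.0], [0.0]]))
A = matrix(np.ones((1, 2)))                      # weights sum to one
b = matrix(np.array([[1.0]]))

solvers.options["show_progress"] = False
sol = solvers.qp(P, q, G, h, A, b)
print(np.array(sol["x"]).ravel())                # optimal weights, here [0.5, 0.5]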




Linear and Quadratic programming

We all practice optimization, since it is a way of life: we all want to make the most of our available time and make it productive. Optimization finds uses ranging from time management to solving supply chain problems. We have seen that optimization refers to finding the best possible solution out of all feasible solutions. Optimization problems can be further divided into linear programming and quadratic programming problems. Let us walk through each.

Linear Programming

Linear programming is a simple technique for finding the best outcome, or optimum points, from complex relationships depicted through linear relationships. The actual relationships could be much more complicated, but they can be simplified into linear relationships.

Linear programming is widely used in optimization for several reasons:

  • In operations research, many complex real-life problems can be expressed as linear programming problems.
  • Many algorithms for specific optimization problems operate by solving linear programming problems as sub-problems.
  • Many key concepts of optimization theory, such as duality, decomposition, convexity, and convexity generalizations, have been inspired by or derived from ideas in linear programming.
  • The early formulation of microeconomics witnessed the use of linear programming, and it is still used in planning, production, transportation, technology, etc.

Quadratic Programming

Quadratic programming is a method for solving a particular class of optimization problems, in which a quadratic objective function is optimized (minimized or maximized) subject to one or more linear constraints. Quadratic programming is a special case of nonlinear programming. The objective function in QP may carry bilinear or up to second-order polynomial terms; the constraints are usually linear and can be both equalities and inequalities. Quadratic programming is widely used, with applications including:

  • Image and signal processing
  • Optimization of financial portfolios
  • Performing the least-squares method of regression (see the sketch after this list)
  • Controlling scheduling in chemical plants
  • Solving more complex non-linear programming problems
  • Usage in operations research and statistical work
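As an illustration of the least-squares item above: minimizing ||Ax - b||^2 expands to a quadratic in x, so an ordinary least-squares fit is a quadratic program in disguise (here solved directly in closed form with NumPy; the data are synthetic).

import numpy as np

rng = np.random.default_rng(1)
A = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])  # intercept + slope columns
b = A @ np.array([2.0, 0.5]) + rng.normal(0, 0.3, 50)       # noisy synthetic data

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)   # recovered coefficients, close to [2.0, 0.5]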

 

Types of Optimization Techniques

There are many types of mathematical and computational optimization techniques. An essential first step is to categorize the optimization model, since the algorithms used for solving optimization problems are tailored to the nature of the problem.

Integer programming, for example, is a form of mathematical programming in which some or all decision variables are restricted to integer values. The underlying idea can be traced back as far as Archimedes, who described the problem of determining the composition of a herd of cattle. Advances in computational codes and theoretical research led to its formal development. Typical problems solved with integer programming include scheduling, assignment, and knapsack problems.

Genetic algorithms (GAs) are another mathematical and computational optimization technique. They mimic natural selection, evolving a population of candidate solutions through selection, crossover, and mutation, and can handle objective functions that are non-differentiable or highly non-linear. In contrast, linear programming minimizes a linear objective function subject to linear inequality constraints, while nonlinear algorithms work with real numbers and nonlinear functions and are often more complex.

Different variants of genetic algorithms are widely used for approximating the optimal solution to a problem, and they have been applied for decades alongside methods such as linear programming (LP) and other constrained-optimization techniques. Genetic algorithms have changed the way many optimization problems are approached; for example, they can help maximize the yield of a given product or service. A minimal sketch follows.
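The sketch below is purely illustrative (one of many possible GA variants, not a canonical implementation); it evolves a population of candidate x values to minimize f(x) = x^2.

import numpy as np

rng = np.random.default_rng(42)

def genetic_minimize(f, pop_size=30, generations=100, mutation_scale=0.1):
    pop = rng.uniform(-10, 10, pop_size)                       # random initial population
    for _ in range(generations):
        fitness = f(pop)
        survivors = pop[np.argsort(fitness)][: pop_size // 2]  # selection: keep best half
        parents = rng.choice(survivors, size=(pop_size, 2))    # random pairing
        children = parents.mean(axis=1)                        # crossover: blend parents
        pop = children + rng.normal(0, mutation_scale, pop_size)  # mutation
    return pop[np.argmin(f(pop))]

print(genetic_minimize(lambda x: x ** 2))   # close to 0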

In this context, the word "programming" is historical: mathematical programming is a synonym for optimization, not for computer programming. The field combines the study of the mathematical structure of optimization problems, the invention of methods for solving them, and their implementation on computers. The size and complexity of solvable optimization problems have grown with the development of faster computers, and the development of these techniques has followed a similar pattern. This is particularly true of genetic algorithms, which have several applications in biological and chemical research.

  

Examples of Optimization in Machine Learning

Listed below are some well-known machine learning algorithms that employ optimization. You should keep in mind that almost all machine learning algorithms employ some kind of optimization.

  1. Gradient descent in neural networks (unconstrained optimization; sketched after this list).
  2. Method of Lagrange multipliers in support vector machines (constrained optimization).
  3. Principal component analysis (constrained optimization) 
  4. Clustering via expectation maximization algorithm (constrained optimization)
  5. Logistic regression (unconstrained optimization)
  6. Genetic algorithms in evolutionary learning algorithms (different variants exist to solve both constrained and unconstrained optimization problems).
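As a sketch of item 1 above, here is plain gradient descent on a made-up quadratic "loss" whose minimum is known:

import numpy as np

def grad(w):
    # gradient of f(w) = (w1 - 3)^2 + (w2 + 1)^2
    return np.array([2 * (w[0] - 3), 2 * (w[1] + 1)])

w = np.zeros(2)            # initial weights
lr = 0.1                   # learning rate (step size)
for _ in range(100):
    w = w - lr * grad(w)   # step against the gradient
print(w)                   # converges to [3, -1]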

 

Engineering Application of Optimization:

1. Design of structural units in construction, machinery, and space vehicles.

2. Maximizing benefit/minimizing product costs in various manufacturing and construction processes.

3. Optimal path finding in road networks/freight handling processes.

4. Optimal production planning, control, and scheduling.

5. Optimal allocation of resources or services among several activities to maximize the benefit.

 

 

Art of Modeling: Model Building

Development of an optimization model can be divided into five major phases:

  • Data collection
  • Problem definition and formulation
  • Model development
  • Model validation and performance evaluation
  • Model application and interpretation of results

a) Data collection

– may be time-consuming but is the fundamental basis of the model-building process;
– an extremely important phase of the model-building process;
– the availability and accuracy of data can have a considerable effect on the accuracy of the model and on the ability to evaluate the model.

b) Problem definition and formulation

Steps involved:

– identification of the decision variables;
– formulation of the model objective(s);
– formulation of the model constraints.

In performing these steps one must consider the following:

– Identify the important elements that the problem consists of. Determine the number of independent variables, the number of equations required to describe the system, and the number of unknown parameters.
– Evaluate the structure and complexity of the model.
– Select the degree of accuracy required of the model.

 

c) Model development includes:

– the mathematical description,
– parameter estimation,
– input development, and
– software development.

The model development phase is an iterative process that may require returning to the model definition and formulation phase.

d) Model validation and evaluation

– This phase checks the model as a whole.
– Model validation consists of validation of the assumptions and parameters of the model.
– The performance of the model is evaluated using standard performance measures such as the root mean squared error (RMSE) and the R² value.
– Sensitivity analysis is performed to test the model inputs and parameters.
– This phase is also an iterative process and may require returning to the model definition and formulation phase.
– One important aspect of this process is that in most cases the data used in the formulation process should be different from the data used in validation.
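A short sketch of the two performance measures named above, computed with NumPy on made-up arrays y_true (observations) and y_pred (model output):

import numpy as np

y_true = np.array([3.0, 5.0, 7.0, 9.0])   # illustrative observations
y_pred = np.array([2.8, 5.3, 6.9, 9.2])   # illustrative model predictions

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(rmse, r2)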



Modeling Techniques

Different modeling techniques have been developed to meet the requirements of different types of optimization problems. Major categories of modeling approaches are:

– classical optimization techniques,
– linear programming,
– nonlinear programming,
– geometric programming,
– dynamic programming,
– integer programming,
– stochastic programming,
– evolutionary algorithms, etc.

  


Importing Modules

Here we use the optimize module from the SciPy library, which we assume is imported in the following manner:

In [1]: from scipy import optimize

In the later part of this chapter we also look at linear programming using the cvxopt library, which we assume to be imported in its entirety without any alias:

In [2]: import cvxopt

For basic numerics, symbolics, and plotting, we also use the NumPy, SymPy, and Matplotlib libraries, which are imported and initialized as follows:

In [3]: import matplotlib.pyplot as plt

In [4]: import numpy as np

In [5]: import sympy

In [6]: sympy.init_printing()
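Assuming the imports above have been executed, the following quick check ties them together: the same made-up function is minimized numerically with scipy.optimize and its stationary point is found symbolically with sympy.

# minimize f(x) = (x - 2)^2 + 1, whose minimum value 1 is attained at x = 2
x = sympy.Symbol("x")
f_sym = (x - 2) ** 2 + 1
print(sympy.solve(sympy.diff(f_sym, x), x))        # [2]: the stationary point

res = optimize.minimize_scalar(lambda x: (x - 2) ** 2 + 1)
print(res.x, res.fun)                              # approximately 2 and 1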

