You are dropped on an alien planet and told to climb the highest mountain.
This is a perfectly good optimization problem. You are given a so-called objective function – in this case your elevation, as a function of latitude and longitude – and your problem is to maximize it. In principle there is an obvious way to solve it: just survey every possible latitude and longitude and, once you’ve covered the whole planet, name the spot that was highest. For that matter you can imagine solving any optimization problem this way, by trying out all possible combinations of input parameters and choosing those that win.
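The survey-everything strategy can be sketched in a few lines. Here is a minimal version, where `elevation` is a made-up stand-in for the planet’s terrain – the real objective would come from actually measuring it:

```python
import math

# A toy elevation function standing in for the planet's terrain
# (hypothetical; the real objective would come from surveying).
def elevation(lat, lon):
    return math.sin(lat) * math.cos(lon) + 0.1 * lat

def exhaustive_search(step=0.1):
    """Try every grid point and report the highest one found.
    Cost grows like (range / step)**2 for two parameters, and
    exponentially with the number of parameters in general."""
    n = int(2 * math.pi / step) + 1
    grid = [i * step - math.pi for i in range(n)]
    return max(((lat, lon, elevation(lat, lon))
                for lat in grid for lon in grid),
               key=lambda point: point[2])
```

Even this toy version makes the cost visible: halving the grid step quadruples the number of points to visit, and every extra parameter multiplies the whole bill again.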
You may protest that this is a bad strategy, and your intuition is right. Implicit in your mountain-climbing mission is the desire to solve the problem efficiently, meaning that your oxygen tank should not have to get exponentially large with the size of the problem. Now in general there is no foolproof trick to be efficient. Instead you must exploit the particular simplifications that your problem allows.
For one thing your objective function, elevation, is fairly simple, and generally goes up and down smoothly as you move around the planet. If you permit me to make this planet somewhat abstract, an objective like this has a simple enough mathematical structure that it can be optimized by calculus alone, without ever leaving your armchair. But the more difficult optimization problems that we often care about usually have a more complicated objective function – for instance, find the place on the planet with the best view. This is pretty hopeless to predict without actually stepping out, but there are nevertheless some regularities. If you’re somewhere on a hill, for example, and the view is good, it’s probably a good idea to walk a bit up the slope and see if it’s any better.
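That walk-a-bit-up-the-slope idea is the heart of hill climbing. Here is a rough sketch, where `view_quality` is a hypothetical stand-in for how good the view is at a point:

```python
import random

def view_quality(lat, lon):
    # Hypothetical stand-in for "how good the view is" at a point;
    # its best value is 0, at (1.0, -0.5).
    return -(lat - 1.0) ** 2 - (lon + 0.5) ** 2

def hill_climb(lat, lon, step=0.1, iters=1000):
    """From where you stand, try a nearby spot; move there only
    if the view improves. Repeat until you run out of patience."""
    best = view_quality(lat, lon)
    for _ in range(iters):
        cand_lat = lat + random.uniform(-step, step)
        cand_lon = lon + random.uniform(-step, step)
        cand = view_quality(cand_lat, cand_lon)
        if cand > best:  # only ever walk uphill
            lat, lon, best = cand_lat, cand_lon, cand
    return lat, lon, best
```

The catch, of course, is that this only ever finds the top of whichever hill you started on – a point the later paragraphs return to.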
In this case our armchair solution also supposes that our problem is unconstrained, that is, that you are allowed to go anywhere on the planet you wish. Suppose, though, that you want to find the best view within a five-minute walk of your spaceship. Realistic optimization problems are also subject to such constraints: it is usually just not possible to push a parameter up to arbitrary values. With constraints, many naive strategies, like simply climbing the nearest hill, fail. For some problems, with simple mathematical structures, the constraint can be turned into a sort of third parameter – latitude, longitude, distance-from-spaceship-tude – while for others more exotic strategies must be adopted.
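One simple way to fold a constraint into the objective – a cruder cousin of the extra-parameter trick – is a penalty term: charge the view for every step taken beyond the allowed radius. A sketch, reusing the same hypothetical `view_quality`:

```python
import math

def view_quality(lat, lon):
    # Hypothetical stand-in for "how good the view is" at a point.
    return -(lat - 1.0) ** 2 - (lon + 0.5) ** 2

def distance_from_ship(lat, lon, ship=(0.0, 0.0)):
    return math.hypot(lat - ship[0], lon - ship[1])

def penalized_view(lat, lon, radius=0.3, weight=100.0):
    """Fold the constraint into the objective: distance beyond the
    allowed radius is charged against the view in proportion to the
    violation, so an unconstrained search now respects the leash."""
    violation = max(0.0, distance_from_ship(lat, lon) - radius)
    return view_quality(lat, lon) - weight * violation
```

Within the allowed radius the penalized objective agrees with the original; beyond it, the penalty quickly swamps any gain in view, steering the search back toward the ship.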
In practice there are relatively few optimization problems that can be solved efficiently. In many cases we can only hope to find a good-enough local optimum in finite search time, trading off between climbing hills in one place and looking for places that might have better hills to climb. Algorithms can make such trade-offs precise. Simulated annealing, for instance, was inspired by a metallurgical technique that alternates heating, to dislodge kinks, with cooling, to let the material settle down around the kinks that remain, with no guarantee that the final product is completely kink-free.
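A minimal simulated-annealing sketch makes the heating-and-cooling metaphor concrete. The objective here is a hypothetical bumpy landscape with many local hills; a plain hill climber would get stuck on the first one it found:

```python
import math
import random

def view_quality(lat, lon):
    # Hypothetical multimodal stand-in: many local hills,
    # with the best views near (1.0, -0.5).
    return (math.cos(3 * lat) * math.cos(3 * lon)
            - 0.5 * ((lat - 1.0) ** 2 + (lon + 0.5) ** 2))

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000):
    """Maximize f. Uphill moves are always taken; downhill moves are
    taken with probability exp(delta / T). The hot early phase wanders
    freely between hills; the cold late phase settles onto one."""
    x, fx = x0, f(*x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = (x[0] + random.uniform(-step, step),
                x[1] + random.uniform(-step, step))
        fc = f(*cand)
        delta = fc - fx
        if delta > 0 or random.random() < math.exp(delta / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

True to the paragraph above, nothing here guarantees the final answer is kink-free: the temperature schedule only trades exploration against settling down, and the best point ever seen is all you get to keep.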
There is a second trade-off, this one in modeling: between constructing a mathematical optimization problem that captures the full richness of reality but cannot be solved efficiently, and making enough simplifying assumptions to get a solvable model – better yet, a model simple enough to yield interpretable statements about how reality works.