Project Euler #190: Maximising a weighted product
Given X, A and m with:
x_1 + x_2 + .. + x_m = X
a_1 + a_2 + .. + a_m = A
For a given (a_1, a_2, .., a_m), we want the maximum of x_1^a_1 * x_2^a_2 * ... * x_m^a_m, which we call Qm(X, (a_1, a_2, .., a_m)).
Question:
When trying to find Qm(X, (a_1, a_2, .., a_m)), is there an existing formula for it, or do we need to calculate x_1^a_1 * x_2^a_2 * ... * x_m^a_m for all (x_1, x_2, .., x_m) combinations?
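(For reference, the brute-force reading of "all combinations" would be a grid search like the Python sketch below; the weights a, the total X and the step size are made-up example inputs, and the grid only approximates the true maximizer.)

```python
# Brute-force reading of the question: scan a discrete grid of feasible
# (x_1, .., x_m) with x_1 + .. + x_m = X and keep the best product.
# a, X and step are example values, not from the problem statement.
from itertools import product as cartesian

a = [1, 2, 3]     # example weights
X = 6.0
step = 0.5
grid = [step * k for k in range(1, int(X / step))]

best, best_x = -1.0, None
for x in cartesian(grid, repeat=len(a) - 1):
    last = X - sum(x)          # enforce the sum constraint
    if last <= 0:
        continue
    xs = list(x) + [last]
    val = 1.0
    for xi, ai in zip(xs, a):
        val *= xi ** ai
    if val > best:
        best, best_x = val, xs

print(best_x, best)   # crude approximation of the maximizer
```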
We need to maximize f(x) = x_1^a_1 * x_2^a_2 * ... * x_m^a_m on the set M = { x : x_i > 0, x_1 + x_2 + .. + x_m = X }, which is an affine hyperplane (or some subset), as pointed out in another comment.
We define the Lagrange function
Λ(x, λ) = f(x) - λ*(x_1 + x_2 + .. + x_m - X)
and look for critical points. These are the points where the gradient of Λ vanishes:
∂Λ/∂x_i = a_i*f(x)/x_i - λ = 0 (i = 1, .., m)
∂Λ/∂λ = -(x_1 + x_2 + .. + x_m - X) = 0
The second equation keeps the constraint intact; the first equation, for i = 1, .., m, can be rearranged into
x_i = a_i*f(x)/λ.
Summation over i yields the equation
X = (A/λ)*f(x), i.e. λ = (A/X)*f(x),
which gives the critical points
x_i* = (a_i/A)*X.
Inserting x* into f we get
Qm = f(x*) = (X/A)^A * a_1^a_1 * a_2^a_2 * ... * a_m^a_m.
Note that this need not be a maximum or minimum. As in the unconstrained case, one must analyze the second-order partial derivatives to determine the nature of the critical point x*.
See here.
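To make the derivation concrete, here is a small Python sanity check (my addition, not part of the comment above; a and X are arbitrary example inputs) comparing f at the critical point x_i* = (a_i/A)*X against random feasible points:

```python
# Numerical sanity check of the critical point x_i* = (a_i / A) * X.
# a and X are arbitrary example inputs.
import math
import random

a = [1.0, 2.0, 3.0, 4.0]           # example weights a_1..a_m
X = 10.0                           # required sum of the x_i
A = sum(a)

def f(x):
    return math.prod(xi ** ai for xi, ai in zip(x, a))

x_star = [ai * X / A for ai in a]  # claimed maximizer

# Compare against random feasible points (positive, summing to X).
for _ in range(10000):
    r = [random.random() for _ in a]
    y = [X * ri / sum(r) for ri in r]
    assert f(y) <= f(x_star) * (1 + 1e-12)

print(x_star, f(x_star))
```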
Analysis of the Hessian is unnecessary. Rephrase the problem as minimization of F(x) = -log f(x) = -(a_1*log x_1 + .. + a_m*log x_m) and note the function is convex in x. The Lagrange multiplier method gives you the same x* of course. For convex functions, the derivative actually gives a supporting hyperplane: F(y) >= F(x*) + ∇F(x*)·(y - x*). Choose x* and y on the constraint surface y_1 + .. + y_m = X. Then ∇F(x*) = -(A/X)*(1, .., 1), and plugging into the hyperplane inequality gives F(y) >= F(x*) - (A/X)*((y_1 + .. + y_m) - (x_1* + .. + x_m*)) = F(x*), so x* must be a global minimizer of F and hence a global maximizer of f.
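A minimal numerical sketch of that convex reformulation, using scipy's SLSQP solver (the example a and X and the starting point are my choices, purely illustrative):

```python
# Convex reformulation: minimize F(x) = -sum a_i * log(x_i)
# subject to sum(x) == X. a and X are example inputs.
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 2.0, 3.0, 4.0])
X = 10.0
A = a.sum()

F = lambda x: -np.sum(a * np.log(x))
constraint = {"type": "eq", "fun": lambda x: x.sum() - X}
bounds = [(1e-9, None)] * len(a)   # keep the log defined

res = minimize(F, x0=np.full(len(a), X / len(a)),
               bounds=bounds, constraints=[constraint], method="SLSQP")

print(res.x)         # numerically close to (X / A) * a
print((X / A) * a)   # closed-form critical point
```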
Actually I followed the Lagrange multiplier formula and reached something like this:
a_1*x_2^a_2 = a_2*x_1^a_1
..
a_1*x_m^a_m = a_m*x_1^a_1
And if I substitute all the other x's with x_1 in x_1 + x_2 + .. + x_m = X, I get a higher-degree equation in the single variable x_1, and it is hard to solve.
How do you deduce x_i = (X/A)*a_i and Qm = (X/A)^A * a_1^a_1 * ... * a_m^a_m? Is there any other maths method used?
Hint: What values can the vectors take?
Pretty much unlimited.
So what mathematics theorem applies here?
Given a_1 through a_m, optimal values for x_1 through x_m can be found in closed form via the Lagrange multiplier method: https://en.wikipedia.org/wiki/Lagrange_multiplier
You can find that at optimality x_i = (X/A)*a_i, and therefore the optimal value is Qm = (X/A)^A * a_1^a_1 * a_2^a_2 * ... * a_m^a_m.
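A sketch of that closed form in Python; the loop at the bottom assumes the original Project Euler #190 setup (a_i = i, X = m), which may differ from the HackerRank inputs:

```python
# Closed form Qm = (X/A)^A * a_1^a_1 * ... * a_m^a_m.
# The loop assumes the original Project Euler #190 instance
# (a_i = i, X = m, so A = m*(m+1)/2); treat it as an example.
import math

def Qm(X, a):
    A = sum(a)
    result = (X / A) ** A
    for ai in a:
        result *= ai ** ai
    return result

for m in range(2, 16):
    a = list(range(1, m + 1))
    print(m, math.floor(Qm(m, a)))   # floor(P_m) as in Project Euler #190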
Thanks for that hint.
Thanks for your hint.
As we are in full spoiler mode now, I added my derivation. :-)
PS: markisusmark showed we can use LaTeX/MathJaX markup. The only deviation seems to be \\\ instead of \\.
PPS: The exponent of the first factor is not m but A.
Actually it's not a full spoiler, since my program doesn't work... I think there is a trick I'm missing.
Have you corrected that exponent?
Yes, sorry, that was a mistake in my write-up, but I think it was correct in the code. Thank you for the correction.
Thanks, @mvanwoerkom.
I made a mistake in calculating the gradient, it should be:
a_1*x_2 = a_2*x_1
..
a_1*x_m = a_m*x_1
Seems different. I do not know how you arrived there.
Can you provide the function you used to take the gradient? And with respect to which variables did you take the partial derivatives?
This is the same as the one you arrived at. When you have:
x_i = a_i*f(x)/λ (i = 1, .., m)
It is the same as:
x_i/a_i = f(x)/λ for every i
Hence:
x_1/a_1 = x_2/a_2 = .. = x_m/a_m
And dividing the first equation by the others:
a_1*x_i = a_i*x_1 (i = 2, .., m)
OK, I understand now. You used x_i = a_i*f(x)/λ.
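To close the loop, here is a small sympy check for m = 2 (purely illustrative) that the stationarity conditions reduce to a_1*x_2 = a_2*x_1, i.e. the relation discussed above:

```python
# Symbolic check for m = 2: the two stationarity conditions of the
# Lagrangian reduce to a_1*x_2 = a_2*x_1.
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam", positive=True)
a1, a2, X = sp.symbols("a1 a2 X", positive=True)

f = x1**a1 * x2**a2
L = f - lam * (x1 + x2 - X)

# dL/dx_i = 0 gives a_i*f/x_i = lam
print(sp.simplify(sp.diff(L, x1)))   # a1*f/x1 - lam
print(sp.simplify(sp.diff(L, x2)))   # a2*f/x2 - lam

# Dividing the two conditions eliminates lam (and f):
ratio = sp.simplify(sp.diff(f, x1) / sp.diff(f, x2))
print(ratio)   # a1*x2/(a2*x1), i.e. a1*x2 = a2*x1 at the optimum
```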