Mini-Max Sum
It doesn't matter that much, as you do fewer operations every iteration. There's only a minor difference between doing something 1 time in each of 4 iterations and doing it 4 times in 1 iteration, and clearly an input size of 5 is not a case where we should care that much ;)
I'd say it does, at least for a larger number of integers (think about the real world). If you have to go through the list 4 times and I have to do it 2 times, your program will take twice as long in the worst-case scenario.
I think that should be a good part of learning: making things go faster. Not that my solution is any faster.
Those kinds of differences aren't as big (and therefore as consequential) as exponential or logarithmic ones, which are what programmers use to define time complexity. So we don't care whether the time it takes is 4 times the length of the list or 2 times the length of the list, but rather whether it is O(n^2) or O(e^n), where n is the length of the list. And even if the time complexity is O(b*(n^2)) or O(c*(n^2)), where b and c are constants, we don't care; we simply write it as O(n^2). We care more about that than about comparing your "best case" scenario to his "worst case" scenario.
Remove the constant! You should focus on rate of growth rather than on constant-factor operations. :) O(n) = O(2n) = O(3n), etc., when factoring the time complexities of algorithms into your function.
Like rzhang said: how fast does the time it takes to complete the operation grow relative to the number of entries in the dataset?
If you have something that grows at a rate of O(n^2) (e.g. a nested for loop), you have to analyze every element for every element of an array; see the sketch below this comment. (Google bubble sort: its worst case is O(n^2), and we only care about the worst case in most situations.)
If you have something that grows at a rate of O(log n) (e.g. a binary search tree, B-tree, red-black tree, etc.; I know, I'm getting a little into data structures in the algorithms section), you can reach what you're looking for, add what you're adding, and remove what you're removing without having to look at every single element. (Google heap sort: its worst case is O(n log n).)
You should also consider the space complexity (how much memory) of your algorithm. Bubble sort may have a worst-case time complexity of O(n^2), but its space complexity is O(1), which is the best you can get. Radix sort has a worst-case time complexity of O(nk), but its space complexity is higher at O(n+k).
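For a concrete picture of that nested-loop shape (my own sketch, not code anyone posted here), a bubble sort in Kotlin looks like the following; it also swaps elements in place, which is where the O(1) space complexity comes from:

```kotlin
// Bubble sort: the nested loops give O(n^2) comparisons in the worst case;
// swapping elements in place keeps the extra space at O(1).
fun bubbleSort(a: IntArray) {
    for (i in 0 until a.size - 1) {
        // After pass i, the last i elements are already in their final spots.
        for (j in 0 until a.size - 1 - i) {
            if (a[j] > a[j + 1]) {
                val tmp = a[j]
                a[j] = a[j + 1]
                a[j + 1] = tmp
            }
        }
    }
}
```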
Thanks :)
Yeah, but most algorithms for this problem would be Θ(n), in which case the constant would matter because we're aiming for marginal speed ups.
Yes, and don't be fooled by thinking that an algorithm that iterates only once over the items is necessarily faster than one that iterates over them multiple times.
The number of operations per iteration plays an important role: an algorithm that iterates once, but with many operations per iteration, can be slower than an algorithm that iterates multiple times with very few operations per iteration, because the latter may perform fewer total operations.
As a case in point, here is a close Kotlin equivalent of the Python code at the top of this thread:
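(The original snippet isn't shown here, so this is only a sketch of the kind of three-pass code being described; the function name and LongArray parameter are my assumptions.)

```kotlin
// Three passes over the array: sum(), max(), and min() each walk it once.
fun miniMaxSum(arr: LongArray) {
    val total = arr.sum()   // pass 1
    val max = arr.max()     // pass 2
    val min = arr.min()     // pass 3
    println("${total - max} ${total - min}")
}
```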
The code above iterates over the array 3 times: once each for sum, max, and min.

Here's another solution that iterates over the array only once, collecting the running sum, min, and max simultaneously along the way. Yes, you might argue it's a bit contrived, but it's not a totally unreasonable solution, and it proves my point.
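(Again, the original block isn't shown; a plausible sketch of such a single-pass fold, with an accumulator data class of my own invention, might be the following. Allocating a fresh accumulator per element is exactly the kind of overhead measured next.)

```kotlin
// One pass via fold: the accumulator carries (sum, min, max), but a new
// Acc object is allocated for every element, which is pure overhead.
data class Acc(val sum: Long, val min: Long, val max: Long)

fun miniMaxSumFold(arr: LongArray) {
    val acc = arr.fold(Acc(0L, Long.MAX_VALUE, Long.MIN_VALUE)) { a, x ->
        Acc(a.sum + x, minOf(a.min, x), maxOf(a.max, x))
    }
    println("${acc.sum - acc.max} ${acc.sum - acc.min}")
}
```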
The overhead of this approach (1 iteration) causes it to take nearly 10 times longer than the preceding solution (3 iterations). Of course, the timings I took were very rough, and if the array were longer, the overhead of the latter solution might be less significant, but in any case, the latter solution (1 iteration) is clearly and significantly less performant than the former solution (3 iterations).
Here is another solution that iterates over the array only once, but is indeed faster than the first solution (3 iterations) above, by approximately 50% for 5 items (YMMV):
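(The original block isn't shown; a sketch of the explicit single loop being described, incorporating the min/max comparison fix discussed further down the thread, could be:)

```kotlin
// One explicit pass tracking sum, min, and max in locals. The two `if`s are
// deliberately independent: an `else if` would miss the case where the
// first element is the maximum (see the bug report below).
fun miniMaxSumLoop(arr: LongArray) {
    var sum = 0L
    var min = Long.MAX_VALUE
    var max = Long.MIN_VALUE
    for (x in arr) {
        sum += x
        if (x < min) min = x
        if (x > max) max = x
    }
    println("${sum - max} ${sum - min}")
}
```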
The code is longer than the first solution, and is arguably less readable. It certainly (arguably?) imposes a bit more cognitive load to comprehend than the first solution does. In this case, it could be argued that the first solution is more desirable. In other words, at some point, the difference in performance might not be large enough to give up the arguably clearer solution. Ease of understanding might simply trump performance.
There's one more thing to consider here as well. Because of the simplicity (and clarity) of the first solution, the likelihood of bugs is greatly reduced: its built-in functions remove the need for the explicit looping used in the third solution. When you write explicit loops, you introduce the chance of off-by-one bugs and the like.
Nice immutable implementation
Below is your immutable version using Scala (i.e. foldLeft). I didn't go through the pain of implementing a mutable version using the for-comprehension loop.

Looks like Kotlin and Scala are very similar. I don't know the case class equivalent in Kotlin.

That's really cool, but can you please help me with my code in Python? You seem very smart. (I'm a beginner btw and only "know" Python.)
Sorry, I don't know Python. I only know Java properly.
OK. Thanks for letting me know.
Never mind, I figured it out. Everything was right the whole time. I was just using an elif statement instead of an if statement, which was screwing up my code.
There is actually a small error with the single-iteration solution.

If the max value in the array happens to be the first element, it would be assigned to min on the first iteration (since it is less than Long.MAX_VALUE) and would never be set as the max.

This is solved by changing the "else if" to just an "if" (then after the first iteration min and max will hold the same value, which is vacuously true).
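(To illustrate with my own minimal example, not the original code:)

```kotlin
// With `else if`, an input whose first element is the maximum (e.g. [5, 1, 2])
// never updates max: the first element takes the min branch and skips the rest.
fun minMax(arr: LongArray): Pair<Long, Long> {
    var min = Long.MAX_VALUE
    var max = Long.MIN_VALUE
    for (x in arr) {
        if (x < min) min = x   // the first element always takes this branch...
        if (x > max) max = x   // ...so this check must not be an `else if`
    }
    return min to max
}
```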
Good catch. Corrected.
To put some tangible numbers behind this, I implemented the two functions that are O(3n) and O(n), then also added an implementation that should be much slower (but is also very readable and faster to implement) that is O(n log n). The results are surprising:
O(n log n + 2n): 1.2469678640000001
O(3n): 1.1900847209999998
O(n): 1.2989587239999998
The theoretically fastest solution is also the slowest of the group. I can think of a few microarchitectural reasons why that might be the case, including a pair of floating-point comparisons, but the TL;DR is: always benchmark and regression-test code that actually has to run fast. Don't rely on trivial Big-O analysis.
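(The benchmarked code isn't shown; for reference, the readable sort-based O(n log n) variant presumably looks something like this Kotlin sketch:)

```kotlin
// Sort once (O(n log n)); the four smallest and four largest values are
// then just the two ends of the sorted list.
fun miniMaxSumSorted(arr: LongArray) {
    val s = arr.sorted()
    println("${s.take(4).sum()} ${s.takeLast(4).sum()}")
}
```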
yes