Maximum Subarray Sum
use bisect ........
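From the replies below (pr, a1, ind = bisect.bisect_left(a1, pr+1), pr - a1[ind] + m, and insort), the solution being discussed is presumably something like this sketch; treat it as a reconstruction rather than the exact original code:

```python
import bisect

def maximumSum(a, m):
    best = 0
    pr = 0        # prefix sum modulo m up to the current element
    a1 = []       # sorted list of prefix remainders seen so far
    for x in a:
        pr = (pr + x) % m
        best = max(best, pr)                  # subarray starting at index 0
        ind = bisect.bisect_left(a1, pr + 1)  # smallest earlier remainder > pr
        if ind < len(a1):
            best = max(best, pr - a1[ind] + m)
        bisect.insort(a1, pr)                 # keep the remainders sorted
    return best
```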
Hello, thanks for sharing, it is pretty efficient. Why are you using ind = bisect.bisect_left(a1, pr+1)? I know that the goal is to maximize pr - a1[ind] + m, but isn't it possible that the maximum is pr + 2 in a few special cases?
First, read about the bisect functions; they will help you a lot. bisect only works on sorted lists, and insort keeps the list sorted as you insert.
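A quick illustration of what bisect and insort do on a sorted list, in case it helps:

```python
import bisect

a1 = [0, 2, 5, 5, 9]               # must already be sorted
print(bisect.bisect_left(a1, 5))   # 2: first index where 5 could be inserted
print(bisect.bisect(a1, 5))        # 4: index just past the existing 5s
bisect.insort(a1, 4)               # insert 4 while keeping the list sorted
print(a1)                          # [0, 2, 4, 5, 5, 9]
```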
I see thanks !!
Python 3 solution
It doesn't pass the 16th test case.
I passed using this solution in Python 3. Maybe you can try submitting the same code under PyPy3; it is usually faster than Python 3.
Even with this, it was not fast enough for me and failed on test case 14.

The optimization that allowed this method to pass for me was to preinitialize the array. This saves the many memory operations that would otherwise be spent growing it.

That said, I myself don't quite know what principle allows us to know which prefix to search for based on the current one.
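For what it's worth, I read "preinitialize the array" as allocating it once up front and writing by index instead of growing it with append; a rough sketch of that idea (not necessarily exactly what was done above):

```python
def prefix_remainders(a, m):
    # Allocate the result array once instead of growing it with append.
    prefix = [0] * len(a)
    running = 0
    for i, x in enumerate(a):
        running = (running + x) % m
        prefix[i] = running   # write in place
    return prefix

print(prefix_remainders([3, 3, 9, 9, 5], 7))  # [3, 6, 1, 3, 1]
```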
This can be made a little more efficient. You're using bisect_left, followed by insort later, which is a combination of bisect followed by insert. So you're basically repeating the bisect step when you could just use the index you got from the first bisect call. You can also avoid a little bit of confusion by using bisect instead of bisect_left and removing the +1. The following passes all test cases.
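Presumably something along these lines (a sketch of the described change, reusing the maximumSum(a, m) signature from above; not necessarily the exact code that was posted):

```python
import bisect

def maximumSum(a, m):
    best = 0
    pr = 0
    a1 = []   # sorted prefix remainders seen so far
    for x in a:
        pr = (pr + x) % m
        best = max(best, pr)
        # bisect (i.e. bisect_right) already returns the position of the first
        # remainder strictly greater than pr, so the +1 is unnecessary.
        ind = bisect.bisect(a1, pr)
        if ind < len(a1):
            best = max(best, pr - a1[ind] + m)
        a1.insert(ind, pr)   # reuse the index instead of a second bisect via insort
    return best
```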
I am writing to try to explain how this works and why it's a very clever solution.
Let's not consider the if inside the for loop; then the for loop basically just iterates over the prefixes, e.g. a[0], a[0:1], a[0:2], ... The concern for most people would be all the other combinations of elements, and many choose brute force to cover them. But this is what makes this answer a clever one.

The secret is the if statement. The max() is self-evident; pr - a1[ind] + m is the trick. pr is just the remainder of the prefix sum up to a[i] modulo m, so pr + m gives the same remainder but is larger than m (the smallest number larger than m whose remainder is pr). ind is the index that separates all the previous remainders into two parts: the left part (remainders less than or equal to pr) and the right part (remainders larger than pr). For the remainders that are less than or equal to pr, (pr + m - a1[ind]) % m <= pr, e.g. if pr = 3 and a1 = [0, 1, 2, 3, 4, 5], then (pr + m - a1[ind]) % m <= 3, so there is no need to consider that part. For the right part, those larger than pr, the reason it only considers the element at ind is that (pr + m - a1[ind]) gives a remainder, and the one at ind = bisect.bisect(a1, pr) gives the largest such remainder. E.g. with pr = 3 and m = 7, ind = 4 and (pr + m - a1[ind]) = (3 + 7 - a1[4]) = 6, but plugging in a1[5] instead would give 5. 6 > 5 > ... is the reason it only considers the element at ind = bisect.bisect(a1, pr). For what bisect is, check this out: https://www.geeksforgeeks.org/bisect-algorithm-functions-in-python/

Subtracting a1[ind] in (pr + m - a1[ind]) has the same effect as cutting one particular prefix off the front of the a[i:j] sequence. I think the reason this answer is clever is that, while most people would think about all the different combinations of elements, this answer only considers removing one prefix from the current one, so it never needs the nested for loop of an O(n^2) answer. And this observation about the key role of the element at ind = bisect.bisect(a1, pr), combined with pr - a1[ind] + m, is definitely the killer.
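To make the pr = 3, m = 7 example above concrete:

```python
import bisect

pr, m = 3, 7
a1 = [0, 1, 2, 3, 4, 5]          # previous prefix remainders, kept sorted

ind = bisect.bisect(a1, pr)      # 4: first remainder strictly greater than pr
for r in a1:
    print(r, (pr - r + m) % m)
# remainders <= pr (0, 1, 2, 3) give 3, 2, 1, 0 -> never beat pr itself
# remainders  > pr (4, 5)       give 6, 5      -> a1[ind] = 4 gives the best, 6
```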
Can you please tell me what is meant by (pr - a1[ind] + m)? Thank you!
Impressive solution, thanks for sharing.
The runtime analysis on this is surprising to me; insert (i.e. bisect.insort) is O(n) [1], so it reads as though this solution is ~O(n^2), but empirically it performs much better than the naive solution that simply compares all (i, j) pairs.

Would love to hear more thoughts or be corrected here; at first blush this solution feels somewhat like "getting away" with ignoring the BST hoops other languages have to jump through.
[1] https://wiki.python.org/moin/TimeComplexity
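For what it's worth, the O(n) insert can be avoided in Python too, for example with the third-party sortedcontainers package, which plays roughly the role of the balanced BST that other languages reach for. A sketch, not the original solution (the function name and structure here are just for illustration):

```python
from sortedcontainers import SortedList  # third-party: pip install sortedcontainers

def maximum_sum(a, m):
    best = 0
    pr = 0
    seen = SortedList()                  # roughly O(log n) insertion
    for x in a:
        pr = (pr + x) % m
        best = max(best, pr)
        ind = seen.bisect_right(pr)      # first earlier remainder > pr
        if ind < len(seen):
            best = max(best, pr - seen[ind] + m)
        seen.add(pr)
    return best
```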
Thank you for enlightening us! Nice solution, short and to the point, though it leans on intuition a bit, so it's kind of unclear what you are doing; I will read up on it and educate myself. Thanks again!!