Given an array of integers and an integer k:
Find the maximum sum of any subset of the array that is less than or equal to k.
Example:
array = [7, 6, 9, 11], k = 25
Answer:
24
Explanation:
possible combinations:
size 1 = (7),(6),(9),(11)
size 2 = (6,7), (6,9), (6,11), (7,9), (7,11), (9,11)
size 3 = (6,7,9), (6,7,11), (6,9,11), (7,9,11)
size 4 = (6,7,9,11)
Among these combinations, the maximum sum that is less than or equal to k=25 is (6,7,11) = 24.
Constraints:
size of the array is 1 to 40
value of k is 1 to 10^9
values of the items in the array are 1 to 10^9
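For the small example above, the expected answer can be sanity-checked with a brute force over every combination (the helper name brute_force_max_sum is just for illustration; with up to 40 items there are 2^40 subsets, so this is unusable for the real constraints):

from itertools import combinations

def brute_force_max_sum(items, k):
    # Check every subset of every size and keep the largest sum that is <= k.
    best = 0
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            s = sum(combo)
            if best < s <= k:
                best = s
    return best

print(brute_force_max_sum([7, 6, 9, 11], 25))  # 24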
I am using the code mentioned in this post:
def largest_subset(items, k):
    res = 0
    # We can form subset with value 0 from empty set,
    # items[0], items[0...1], items[0...2]
    arr = [[True] * (len(items) + 1)]
    for i in range(1, k + 1):
        # Subset with value i can't be formed from empty set
        cur = [False] * (len(items) + 1)
        for j, val in enumerate(items, 1):
            # cur[j] is True if we can form a set with value of i from
            # items[0...j-1]
            # There are two possibilities
            # - Set can be formed already without even considering item[j-1]
            # - There is a subset with value i - val formed from items[0...j-2]
            cur[j] = cur[j-1] or ((i >= val) and arr[i-val][j-1])
        if cur[-1]:
            # If subset with value of i can be formed store
            # it as current result
            res = i
        arr.append(cur)
    return res
It works for small inputs, but when the input is huge the code fails at the line cur = [False] * (len(items) + 1) with a MemoryError.
I also tried the code from another post:
def subset_sum(vals, target=0):
    sums = {0: [()]}  # key=sum, value=list of subsets for the sum
    if target in sums:
        yield from sums[target]  # annoying base case
    for val in vals:
        items = sums.items()  # don't change dict size during iteration
        sums = dict(items)
        for prev_sum, prev_subsets in items:
            sum_ = prev_sum + val
            subsets = [s + (val,) for s in prev_subsets]
            sums[sum_] = sums.get(sum_, []) + subsets
            if sum_ <= target:
                yield from subsets
It also fails with the same error at the line subsets = [s + (val,) for s in prev_subsets].
What is the correct approach to solve this problem?
Modifying this answer: Smallest number that cannot be formed from sum of numbers from array
Use bitvectors to accomplish this.
Start with an empty bitvector b. Then for each element x in your array, do this:
b = b | (b << x) | 2^(x-1)
To be clear, the i'th bit is set to 1 to represent that the number i can be formed, and | 2^(x-1) sets the x'th bit to 1.
In doing the above step (b = b | ...), truncate your revised b after k bits since we don't care about making larger values.
After you finish processing the array, the largest index of a set bit in b is your answer (counting from the right, starting at 1).
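In Python, where ^ means XOR rather than exponentiation, that update step is written with shifts. A tiny sketch of processing the first two elements of the question's array (the names b and mask are mine):

k = 25
mask = (1 << k) - 1              # truncate: keep only bits for sums 1..k
b = 0                            # bit (i - 1) set means some subset sums to i
for x in [7, 6]:                 # first two elements of the example array
    b = (b | (b << x) | (1 << (x - 1))) & mask
print(bin(b))                    # 0b1000001100000: sums 6, 7 and 13 are reachable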
For example, with the array [4,13,2,3,1], we can make all values up to 23 other than 11 and 12.
There's a set bit at every index corresponding to a value that can be made by summing a subset of the input array.
This would still work but would need modification if the input array had negative numbers.
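Putting it all together, here is a minimal sketch of this approach in Python (the name max_subset_sum_le_k is mine, not from the linked answer), applied to the question's example and to the [4,13,2,3,1] example above:

def max_subset_sum_le_k(items, k):
    b = 0                                  # bit (i - 1) set means some subset sums to i
    mask = (1 << k) - 1                    # truncate: ignore sums larger than k
    for x in items:
        b = (b | (b << x) | (1 << (x - 1))) & mask
    return b.bit_length()                  # index of the highest set bit, counting from 1; 0 if no sum fits

print(max_subset_sum_le_k([7, 6, 9, 11], 25))     # 24
print(max_subset_sum_le_k([4, 13, 2, 3, 1], 23))  # 23

Each bitvector needs at most k bits, roughly k/8 bytes (about 125 MB when k = 10^9), which is heavy but far smaller than the (k + 1) x (n + 1) boolean table that caused the MemoryError in the first attempt, and Python's arbitrary-precision integers handle the shifts and ORs directly.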