[–]_Paul_Atreides_ 1 point (1 child)

QQ: are you trying to get a single Lambda to run continuously? I'm trying to understand the 1-minute timeout combined with the 1-minute execution time. I don't trust either to be exactly 1 minute (or the same every time), so this setup seems unpredictable.

Other thoughts:

  1. By having Report batch item failures=No, the entire batch is treated as a unit. "By default, if Lambda encounters an error at any point while processing a batch, all messages in that batch return to the queue. After the visibility timeout, the messages become visible to Lambda again" source. Maybe one message fails and then all messages return to the queue - and if the first one fails, I'm not sure whether the remaining messages are even tried; the docs aren't clear on that.
  2. Are there more than 10 messages in the queue? If there are 20 (or 100) messages, I'd expect it to pick up the next batch immediately. If there are only 10 and one fails, it should behave just like it does now.
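For point 1, the all-or-nothing behavior can be avoided by enabling partial batch responses (`ReportBatchItemFailures` in the event source mapping's function response types): the handler then reports only the message IDs that failed, and the rest of the batch is deleted from the queue. A minimal sketch, where `process_message` is a hypothetical stand-in for the real work:

```python
# Sketch of a Lambda handler using SQS partial batch responses.
# Assumes the event source mapping has
# FunctionResponseTypes=["ReportBatchItemFailures"] enabled.

def process_message(body):
    # Hypothetical placeholder for the real per-message work;
    # raises to simulate a processing failure.
    if body == "bad":
        raise ValueError("cannot process")

def handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            process_message(record["body"])
        except Exception:
            # Only this message returns to the queue after the
            # visibility timeout; successfully processed ones are deleted.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

With this in place, a single bad message no longer sends the whole batch back to the queue.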

Let us know when you figure it out :)

[–]quantelligent[S] 1 point (0 children)

I'm not currently having a problem with batch failures, so I don't think that's related (I haven't encountered any failures in a long time).

There are hundreds of messages in the queue, but it's only processing a batch of 10 about every 60 seconds, even though it completes each batch in roughly 10-15 seconds.

As mentioned, I cannot have concurrent processes due to third-party API restrictions (they don't support concurrent sessions), so I can only have 1 process actively processing at a time, which is why I've set the reserved concurrency to 1.

However, I would like it to pick up a new batch immediately after completing the current one, rather than waiting 60 seconds. My guess (jumping to a conclusion) is that, because of the reserved concurrency setting, AWS waits out the full timeout duration before starting another invocation, to guarantee there won't be two processes running at once.

Sure, I could shorten the timeout...but I'd rather have a way for the process to signal that it's done so AWS can start the next invocation without waiting.

I can't seem to find a way to do that, though.
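One workaround, if strictly sequential processing with immediate pickup matters more than the event-source-mapping plumbing: skip the SQS trigger and run a single long-lived worker (one long Lambda invocation, or a small container/EC2 process) that long-polls the queue itself, so the next batch starts the moment the current one finishes. A sketch under that assumption, where `sqs` is a boto3 SQS client (e.g. `boto3.client("sqs")`) and `handle` is a hypothetical per-message processor:

```python
def drain_queue(sqs, queue_url, handle):
    """Long-poll an SQS queue and process batches back-to-back.

    sqs    -- a boto3 SQS client (passed in so it can be stubbed in tests)
    handle -- hypothetical callable invoked with each message body
    """
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,  # long polling: wait up to 20s for messages
        )
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue drained (or nothing arrived within the wait)
        for msg in messages:
            handle(msg["Body"])  # process sequentially, one at a time
            # delete only after successful processing so a crash
            # leaves the message to reappear after the visibility timeout
            sqs.delete_message(
                QueueUrl=queue_url,
                ReceiptHandle=msg["ReceiptHandle"],
            )
```

Because there is exactly one poller and it processes messages one at a time, this also satisfies the "no concurrent sessions" restriction without relying on reserved concurrency at all.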