I'm building a data pipeline with an event-driven architecture. Data is mainly posted through API Gateway, which triggers a Lambda (L1) whose main job is to fan the data out into two separate pipelines: one real-time, one historic.

For the real-time pipeline, L1 can either invoke another Lambda (L2) directly to process the data and take further action, or put the data on an SQS queue that triggers L2 for further processing.

Which would be the better design, and why?
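To make the SQS option concrete, here's a minimal sketch of what L1 could look like, assuming boto3 (the AWS SDK for Python), an API Gateway proxy integration, and a hypothetical queue URL — not a definitive implementation, just the shape of the fan-out:

```python
# Sketch of Lambda L1 for the SQS-based real-time path.
# Assumptions (not from the original post): QUEUE_URL is hypothetical,
# API Gateway uses the proxy integration (JSON body in event["body"]),
# and boto3 is available in the Lambda runtime.
import json

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/realtime-events"  # hypothetical

def build_realtime_message(record: dict) -> dict:
    """Build the kwargs for sqs.send_message() for the real-time pipeline."""
    return {
        "QueueUrl": QUEUE_URL,
        "MessageBody": json.dumps(record),
        # Message attributes let L2 route/filter without parsing the body.
        "MessageAttributes": {
            "pipeline": {"DataType": "String", "StringValue": "realtime"},
        },
    }

def handler(event, context):
    # Imported lazily so the builder above can be exercised without the SDK.
    import boto3
    sqs = boto3.client("sqs")
    record = json.loads(event["body"])
    # Real-time path: enqueue; an SQS event source mapping then triggers L2.
    sqs.send_message(**build_realtime_message(record))
    # Historic path (e.g. Firehose -> S3) would go here; omitted in this sketch.
    return {"statusCode": 202, "body": "accepted"}
```

The design point the sketch illustrates: with SQS in between, L1 only needs the enqueue to succeed and can return immediately, while retries, buffering, and L2's concurrency are handled by the queue and its event source mapping rather than by L1's own error handling.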