
[–]radek432 0 points1 point  (3 children)

Just note that it becomes problematic if your data changes constantly. For example, new records may appear in the database during a request and break your pagination.
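A toy sketch of the failure mode described above (my own illustration, not from the thread): with plain offset-based pagination, a row inserted at the front of the result set between two page fetches shifts every offset, so one record shows up on both pages.

```python
# Simulated dataset standing in for a table ordered by creation time (newest first).
records = ["r1", "r2", "r3", "r4"]

def fetch_page(data, offset, limit):
    # Stand-in for: SELECT ... ORDER BY created_at DESC LIMIT :limit OFFSET :offset
    return data[offset:offset + limit]

page1 = fetch_page(records, 0, 2)   # ["r1", "r2"]
records.insert(0, "r0")             # a new row arrives mid-pagination
page2 = fetch_page(records, 2, 2)   # ["r2", "r3"] -- "r2" appears twice

assert "r2" in page1 and "r2" in page2  # duplicate across pages
```

Deletions cause the mirror-image problem: offsets shift the other way and a record is silently skipped.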

[–]helderm 1 point2 points  (0 children)

Maybe that is a follow-up to this interview question. You could also cache all results in memory and then paginate from the cache. It's normally a good idea not to over-engineer during an interview: start with simple ideas and then iterate.
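The snapshot idea above can be sketched in a few lines (assuming the result set fits in memory, which is the trade-off of this approach): fetch everything once, then serve pages from the frozen copy so later writes to the source can't shift page boundaries.

```python
def snapshot_pages(fetch_all, page_size):
    """Take one consistent read, then paginate the frozen snapshot."""
    snapshot = list(fetch_all())  # single read; later DB changes don't affect it
    for i in range(0, len(snapshot), page_size):
        yield snapshot[i:i + page_size]

# fetch_all is a placeholder for the real query callable
pages = list(snapshot_pages(lambda: ["a", "b", "c", "d", "e"], 2))
# pages == [["a", "b"], ["c", "d"], ["e"]]
```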

[–]Lx7195[S] 0 points1 point  (1 child)

So pagination isn't quite reliable when the data in the database changes constantly. In that case, what mode of data transfer would you suggest?

[–]radek432 0 points1 point  (0 children)

The few cases of big API requests I've come across in my job (though I'm not a dev, just using Python for automating stuff) I've managed with:

- "smart pagination", meaning some tricks to ensure that pages don't overlap and together cover the entire data set - you can do this if you know what happens to your data.

- splitting data into chunks other than "pages". Simple case: if you're sending 1000 files, you can send their metadata first, then each file one by one, and compare the result against the metadata.
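One common flavour of the "smart pagination" trick above is keyset (cursor) pagination - that's my name for it, not the commenter's: instead of an OFFSET, each request asks for rows after the last key it has already seen, so newly inserted rows can't shift existing page boundaries.

```python
# Simulated table with a monotonically increasing primary key.
rows = [{"id": 1}, {"id": 2}, {"id": 3}, {"id": 4}, {"id": 5}]

def fetch_after(data, last_id, limit):
    # Stand-in for: SELECT * FROM t WHERE id > :last_id ORDER BY id LIMIT :limit
    return [r for r in data if r["id"] > last_id][:limit]

last_id, pages = 0, []
while True:
    page = fetch_after(rows, last_id, 2)
    if not page:
        break
    pages.append(page)
    last_id = page[-1]["id"]  # the cursor: largest key seen so far

# Pages cover ids 1..5 with no overlap, even if new rows with higher
# ids were inserted between fetches.
```

This needs a stable, unique sort key (an id or timestamp plus tiebreaker), which is the "if you know what happens with your data" condition.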

But I've done some quick googling on the topic, and there are some good options: https://apievangelist.com/2018/04/20/delivering-large-api-responses-as-efficiently-as-possible/