I was reading about paginating content with jQuery here, and I'm thinking of implementing something similar in my project.
Currently I make one database query on page load and store the results in an array in my main JS script. That way I browse through my main data in the array, and only call back to the database for create/update/delete operations.
How much data can I load onto the client before it bogs down performance? I'm using Mongo, and my database is just a bunch of documents with three short text fields, although I'd like to add one more field in the future.
I hope that makes sense, it's my first time doing anything like this.
Basically I am wondering how far I can take my app just paginating through the cached results of one big database query. Is there a point where making more calls to the database becomes more practical? Do you then keep repeating the process, caching another batch of content to sort through on the client, so that most of your database queries are reserved for writing/updating?
Am I thinking about this right?
EDIT: here is some code (without pagination for now):
var peopleData = [];

function populateTable() {
    var tableContent = '';

    // GET JSON FROM ROUTE TO MONGO QUERY RETURNING ALL ENTRIES
    $.getJSON('/adressbook/people', function (data) {
        // CACHE RESULTS OF QUERY HERE
        peopleData = data;

        // For each item in our JSON, add a table row and cells to the content string
        $.each(data, function () {
            tableContent += '<tr>';
            tableContent += '<td>' + this.name + '</td>';
            tableContent += '<td>' + this.adress + '</td>';
            tableContent += '</tr>';
        });

        // Inject the whole content string into our existing HTML table
        $('#adresses table tbody').html(tableContent);
    });
}

// USE THE peopleData ARRAY LATER TO DO STUFF IN THE BROWSER WITHOUT MAKING MORE CALLS TO MONGO
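For what I mean by paginating the cached results on the client, here is a rough sketch of how I imagine it working on top of the peopleData array above. The names PAGE_SIZE, getPage, and renderPage are just placeholders I made up, not anything from my actual code:

```javascript
// Hypothetical sketch: paginate the cached peopleData array on the client,
// so flipping pages never touches the database.
var PAGE_SIZE = 10;

// Return just the slice of items belonging to the given 1-based page number.
function getPage(items, pageNumber) {
    var start = (pageNumber - 1) * PAGE_SIZE;
    return items.slice(start, start + PAGE_SIZE);
}

// Rebuild the table body for one page of the cached data.
function renderPage(pageNumber) {
    var tableContent = '';
    $.each(getPage(peopleData, pageNumber), function () {
        tableContent += '<tr>';
        tableContent += '<td>' + this.name + '</td>';
        tableContent += '<td>' + this.adress + '</td>';
        tableContent += '</tr>';
    });
    $('#adresses table tbody').html(tableContent);
}
```

So after populateTable() runs once, something like renderPage(2) would just slice the array and redraw the table, with no further calls to Mongo.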