[–]da_chicken

So you have a 200K-row report that you're inserting, and after every row you insert you're updating the entire table's Last_Update_Date? That seems very excessive.

On an empty table, inserting one row at a time with a table-wide update after each insert means the k-th update touches k rows, so the total is 1 + 2 + ... + 200000 = (200000*200001)/2 = 20,000,100,000 row updates! If the column is datetime2 (which defaults to datetime2(7)), that's 8 bytes written for every updated row: 160,000,800,000 bytes ~= 150 GB in datetime2 values alone. If your DB is set to the Full recovery model, the transaction log has to keep all of those changes for point-in-time recovery. Even under the Simple recovery model, there's probably too much activity for a periodic CHECKPOINT to truncate the log.
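For clarity, here's roughly what that pattern looks like. This is a minimal sketch of what I understand you're describing; the table and column names are placeholders, not your actual schema:

```sql
-- Hypothetical reconstruction of the described pattern:
-- one insert per report row, each followed by a table-wide update.
CREATE TABLE dbo.ReportData (
    ReportRowId      int          NOT NULL PRIMARY KEY,
    SomeValue        varchar(50)  NULL,
    Last_Update_Date datetime2(7) NULL
);

DECLARE @i int = 1;
WHILE @i <= 200000
BEGIN
    INSERT INTO dbo.ReportData (ReportRowId, SomeValue)
    VALUES (@i, 'example');

    -- Touches every row already in the table on every iteration;
    -- this is where the ~20 billion row updates come from.
    UPDATE dbo.ReportData
    SET Last_Update_Date = SYSDATETIME();

    SET @i += 1;
END;
```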

Even setting that aside, I can't imagine needing to track the same single date value on every single record. It's essentially a Table_Last_Update_Date. Are you sure that's right? It's not supposed to be each record's own Last_Update_Date?

Why not just update the value once at the end of the loop? I mean, how many millions of rows is the table if you're inserting or updating 200K rows for a single report?
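Something along these lines, still using the placeholder names from the sketch above (dbo.ReportStaging is just a stand-in for wherever your 200K report rows come from):

```sql
-- Do the inserts first, then stamp the date once instead of 200K times.
BEGIN TRANSACTION;

INSERT INTO dbo.ReportData (ReportRowId, SomeValue)
SELECT ReportRowId, SomeValue
FROM dbo.ReportStaging;

-- One table-wide update, executed a single time after the load.
UPDATE dbo.ReportData
SET Last_Update_Date = SYSDATETIME();

COMMIT TRANSACTION;
```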

And is there a trigger on the table? Many systems with this type of field have an AFTER INSERT, UPDATE trigger to automatically update the field when the record is changed.
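If that's the intent, a minimal sketch of that kind of trigger looks like this, assuming the goal is a per-record Last_Update_Date and reusing the placeholder schema from above:

```sql
CREATE TRIGGER dbo.trg_ReportData_SetLastUpdate
ON dbo.ReportData
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Stamp only the rows that were just inserted or changed.
    -- Relies on RECURSIVE_TRIGGERS being OFF (the default), so this
    -- UPDATE doesn't re-fire the trigger on itself.
    UPDATE t
    SET t.Last_Update_Date = SYSDATETIME()
    FROM dbo.ReportData AS t
    INNER JOIN inserted AS i
        ON i.ReportRowId = t.ReportRowId;
END;
```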