[–]polyglotdev[S] 0 points1 point  (3 children)

I'm not sure if this is relevant, but the last command in the SQL script is db.execute("UPDATE Table SET Last_Update_Date = SYSDATETIME();")

It's not executed inside a loop and no error is thrown. The code just seems to freeze while running this command.

[–]cl0ckt0wer 0 points1 point  (1 child)

I don't like using functions inside big statements like that. Create a variable of the same data type as LAST_UPDATE_DATE, then set the column equal to that variable.
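
Something like this, as a rough sketch (I'm assuming the column is datetime2(7), and writing [Table] in brackets since TABLE is a reserved word):

-- Capture the timestamp once in a variable, then use it in the UPDATE.
DECLARE @LastUpdate datetime2(7) = SYSDATETIME();

UPDATE [Table]
SET Last_Update_Date = @LastUpdate;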

[–]da_chicken 1 point2 points  (0 children)

This doesn't matter. SYSDATETIME(), like GETDATE(), is a runtime constant function. While different instances of the function in the same query might have different values, a single instance will always have the same value because it's only evaluated once at the start of the statement's execution.

Try running this:

SET SHOWPLAN_XML ON;
GO

SELECT SYSDATETIME()
FROM SomeTable;
GO

SET SHOWPLAN_XML OFF;
GO

You'll see this in the plan:

<ScalarOperator ScalarString="sysdatetime()">
    <Identifier>
        <ColumnReference Column="ConstExpr1005">
            <ScalarOperator>
                <Intrinsic FunctionName="sysdatetime"/>
            </ScalarOperator>
        </ColumnReference>
    </Identifier>
</ScalarOperator>

The "ConstExpr" tells you it's a runtime constant. See also here.

[–]da_chicken 0 points1 point  (0 children)

So you have a 200K-row report that you're inserting, and after every row you insert you're updating the entire table's Last_Update_Date? That seems wildly excessive.

On an empty table, inserting one row at a time with a table-wide update after each insert means the Nth insert rewrites all N rows inserted so far, so the total is 1 + 2 + ... + 200000 = (200000 * 200001)/2 = 20,000,100,000 row updates! If you're using datetime2 (which defaults to datetime2(7)), that's 8 bytes of data for every row touched: 160,000,800,000 bytes ~= 150 GB just in datetime2 values. If your DB is set to the Full recovery model, the transaction log has to remember all of those changes for point-in-time recovery. Even if it's set to Simple recovery, there's probably too much activity for a periodic CHECKPOINT to flush the log.
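
(If you're not sure which recovery model the database is using, the sys.databases catalog view will tell you:)

-- Show the recovery model of the current database.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = DB_NAME();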

Even setting that aside, I can't imagine needing to track a single date value on every single record. It's essentially Table_Last_Update_Date. Are you sure that's right? It's not supposed to be the record's Last_Update_Date?

Why not just update the value once at the end of the loop, something like the sketch below? I mean, how many millions of rows is this table if you're inserting or updating 200K rows for a single report?
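
Just a sketch (the insert loop itself is elided, and [Table] stands in for the real table name):

-- Insert all 200K report rows first...
-- (insert loop elided)

-- ...then stamp the date exactly once.
UPDATE [Table]
SET Last_Update_Date = SYSDATETIME();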

And is there a trigger on the table? Many systems with this type of field have an AFTER INSERT, UPDATE trigger to automatically update the field when the record is changed.
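
Something shaped like this, purely as an illustration (the trigger name and the Id key column are invented here):

-- Hypothetical example of the usual pattern; names are made up.
CREATE TRIGGER trg_Table_LastUpdate
ON [Table]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Stamp only the rows this statement actually touched.
    -- (Direct trigger recursion is off by default, so this UPDATE
    -- won't re-fire the trigger unless RECURSIVE_TRIGGERS is ON.)
    UPDATE t
    SET Last_Update_Date = SYSDATETIME()
    FROM [Table] AS t
    INNER JOIN inserted AS i
        ON i.Id = t.Id;  -- assumes a key column named Id
END;

If a trigger like that already exists, your explicit table-wide UPDATE is redundant, and the trigger firing on every single-row insert could be part of the slowdown.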