Update Table1 Set … Where
Update Table2 Set … Where
etc.
The entire thing should go in one transaction, i.e. if any part fails, the whole thing rolls back. You can make this explicit by bracketing your queries with
Begin Transaction
…
Commit Transaction
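Note that Begin/Commit on their own don’t guarantee a rollback on every error; a common way to make it truly all-or-nothing is XACT_ABORT plus TRY/CATCH. A minimal sketch, assuming SQL Server, with made-up columns and values standing in for the elided Set/Where clauses:

SET XACT_ABORT ON;  -- any run-time error aborts and rolls back the whole transaction
BEGIN TRANSACTION;
BEGIN TRY
    UPDATE Table1 SET SomeCol = 1 WHERE Id = 42;   -- hypothetical column/value
    UPDATE Table2 SET OtherCol = 2 WHERE Id = 42;  -- hypothetical column/value
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- re-raise the error (SQL Server 2012+; use RAISERROR on older versions)
END CATCH;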
When I do these things, I always have to say “GO” after each line or it won’t run properly. But your example does not show this.
Would I have to do this?
Begin Transaction
Go
Update Table1 Set … Where
Go
Update Table2 Set … Where
Go
Commit Transaction
Go
Also, is there a limit on how long a transaction can be? Because I have a few stored procs that can take hours to run and change every piece of data in the DB, and if any of it goes wrong, it’s a catastrophic failure requiring a database restore.
It’s not T-SQL at all; it’s a request to the client tool (SSMS, sqlcmd and the like) to execute a batch.
A batch is everything since the start of the script or the last GO. You can do things like
Insert SillyTable Values (1)
GO 3
and it does it three times.
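Since each GO starts a fresh batch, anything batch-scoped, such as variables, doesn’t survive it. A quick sketch (the variable name is made up):

DECLARE @x INT = 1;
PRINT @x;  -- fine: same batch
GO
PRINT @x;  -- error: @x is gone, because GO started a new batch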
Exactly how it “won’t run properly” without GO is the thing you need to pin down and fix.
Some interfaces don’t like GO at all, and others use a semicolon instead. I wouldn’t recommend it at all for this sort of atomic batch: GO splits the script into separate batches, and by default a failure in one batch doesn’t stop the later batches from running, so the Commit Transaction can still execute after an Update has failed.
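A small sketch of that failure mode, assuming the default SSMS/sqlcmd behaviour of carrying on after a failed batch; RAISERROR stands in for a failing Update:

BEGIN TRANSACTION;
GO
RAISERROR('simulated failure mid-script', 16, 1);  -- pretend an Update blew up here
GO
COMMIT TRANSACTION;  -- this batch still runs, so the partial work commits anyway
GO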
As far as I know there’s no actual limit on the size of a transaction. Memory, tempdb size, transaction log growth, number of locks etc. would be more of a constraint.
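If you want to see which of those constraints a long transaction is leaning on, one sketch (the DMV needs SQL Server 2012+):

DBCC SQLPERF(LOGSPACE);  -- log size and percent used, per database
SELECT total_log_size_in_bytes, used_log_space_in_percent
FROM sys.dm_db_log_space_usage;  -- log usage for the current database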
Never ran into the issue of one SP changing every piece of data in a DB, aside from one-off conversion-type manoeuvres. If I were doing that regularly, I’d do it non-destructively, i.e.
create a blank DB and then query into it, with maybe some sort of check script at the end of it. On success, swap the original out and the new version in.
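A rough sketch of that swap, assuming SQL Server; MyDb, MyDb_New and MyDb_Old are made-up names, and the conversion and check scripts are elided:

CREATE DATABASE MyDb_New;
-- ... run the conversion queries from MyDb into MyDb_New,
--     then the check script against MyDb_New ...
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- kick clients off the old copy
ALTER DATABASE MyDb MODIFY NAME = MyDb_Old;   -- keep the original around as a fallback
ALTER DATABASE MyDb_New MODIFY NAME = MyDb;   -- new version takes the original’s name
-- (MyDb_Old is left in single-user mode as a frozen fallback)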