I need to initialize a new field with the value -1 in a 120 million record table.
Update table
set int_field = -1;
I let it run for 5 hours before canceling it.
I tried running it with the transaction isolation level set to READ UNCOMMITTED, with the same results.
Recovery Model = Simple.
MS SQL Server 2005
Any advice on getting this done faster?
Tags: sql, sql-server, sql-server-2005
If you have the disk space, you could use SELECT INTO to create a new table. SELECT INTO is minimally logged under the simple recovery model, so it should go much faster:
SELECT t.*, int_field = CAST(-1 AS int)
INTO mytable_new
FROM mytable t
-- create your indexes and constraints
GO
EXEC sp_rename 'mytable', 'mytable_old'
EXEC sp_rename 'mytable_new', 'mytable'
DROP TABLE mytable_old
Sounds like an indexing problem, as Pablo Santa Cruz mentioned. Since your update is not conditional, you can drop the column and re-add it with a DEFAULT value.
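A minimal sketch of that approach, assuming the question's table and column names (the constraint name is hypothetical):

-- drop any constraints or indexes referencing the column first
ALTER TABLE mytable DROP COLUMN int_field

-- re-add it NOT NULL with a default; on SQL Server 2005 this still
-- touches every row, but typically completes faster than an explicit UPDATE
ALTER TABLE mytable
    ADD int_field int NOT NULL
    CONSTRAINT DF_mytable_int_field DEFAULT -1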
In general, the recommendations in the other answers apply, but in your particular case you should pick the most appropriate solution or a combination of them.
Also bear in mind that an index can sometimes help, e.g. when you perform a conditional update of a non-indexed column.
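For instance (created_date is a hypothetical column), an index on the filter column lets the engine seek straight to the affected rows even though the updated column itself is not indexed:

-- int_field is not indexed, but an index on created_date narrows the work
UPDATE mytable
SET int_field = -1
WHERE created_date < '20090101'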
If your int_field is indexed, remove the index before running the update, then recreate the index afterwards. Five hours seems like a lot for 120 million records.
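A sketch of that, assuming a hypothetical index name:

DROP INDEX IX_mytable_int_field ON mytable

UPDATE mytable SET int_field = -1

CREATE INDEX IX_mytable_int_field ON mytable (int_field)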
I'd break the task up into smaller units. Test different batch sizes for your table until you find an interval that performs optimally. Here is a sample I have used in the past:

DECLARE @counter int
DECLARE @numOfRecords int
DECLARE @batchsize int

SET @numOfRecords = (SELECT COUNT(*) FROM <table> WITH (NOLOCK))
SET @counter = 0
SET @batchsize = 2500

SET ROWCOUNT @batchsize   -- cap each UPDATE at @batchsize rows
WHILE @counter < (@numOfRecords / @batchsize) + 1
BEGIN
    SET @counter = @counter + 1
    -- NULL never matches <> -1, so test for it explicitly
    UPDATE <table> SET int_field = -1
    WHERE int_field <> -1 OR int_field IS NULL
END
SET ROWCOUNT 0            -- restore the default
If the table has an index you can iterate over, I would put an UPDATE TOP (10000) statement in a WHILE loop that moves over the data. That keeps the transaction log slim and avoids a huge impact on the disk system. I would also recommend playing with the MAXDOP option (setting it closer to 1).
DECLARE @cnt bigint
SET @cnt = 1

WHILE @cnt * 100 < 10000000
BEGIN
    UPDATE TOP (100) [Imp].[dbo].[tablename]
    SET [col1] = xxxx
    WHERE [col1] IS NULL

    PRINT '@cnt: ' + CONVERT(varchar, @cnt)
    SET @cnt = @cnt + 1
END
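A variant of the same loop as a sketch against the question's table: it stops via @@ROWCOUNT when nothing is left to update instead of precomputing the iteration count, and pins MAXDOP as suggested above (the batch size and predicate are assumptions):

DECLARE @rows int
SET @rows = 1

WHILE @rows > 0
BEGIN
    -- each small batch commits on its own, keeping the log slim
    UPDATE TOP (10000) mytable
    SET int_field = -1
    WHERE int_field <> -1 OR int_field IS NULL
    OPTION (MAXDOP 1)

    SET @rows = @@ROWCOUNT
END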
What I'd try first is to drop all constraints, indexes, triggers, and full-text indexes before you update.
If that isn't performant enough, my next move would be to create a CSV file with the 120 million records and bulk import it using bcp.
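If you'd rather stay in T-SQL than shell out to bcp, a BULK INSERT sketch along the same lines (the file path, terminators, and batch size are assumptions):

BULK INSERT dbo.mytable_new
FROM 'C:\data\mytable.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK,            -- allows minimal logging under the simple recovery model
    BATCHSIZE = 100000
)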
Lastly, I'd create a new heap table (i.e., a table with no clustered index) with no indexes on a different filegroup, populate it with -1, partition the old table, and add the new data with a partition SWITCH.
When adding a new column ("initialize a new field") and setting a single value to each existing row, I use the following tactic:
ALTER TABLE MyTable
    ADD NewColumn int NOT NULL
    CONSTRAINT MyTable_TemporaryDefault DEFAULT -1

ALTER TABLE MyTable
    DROP CONSTRAINT MyTable_TemporaryDefault
If the column is nullable and you don't include a WITH VALUES clause, the column will be set to NULL for all existing rows.
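If you do want the column nullable but still need the -1 backfill, WITH VALUES forces the default onto existing rows (a sketch reusing the names above):

ALTER TABLE MyTable
    ADD NewColumn int NULL
    CONSTRAINT MyTable_TemporaryDefault DEFAULT -1
    WITH VALUES   -- without WITH VALUES, existing rows would get NULL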
SET ROWCOUNT 1000000

-- as above, include an IS NULL test if the new column starts out NULL
UPDATE table SET int_field = -1 WHERE int_field <> -1

See how long that takes; adjust the batch size and repeat as necessary.
Source: Stackoverflow.com