[sql-server] How to delete large data of table in SQL without log?

I have a large data table. There are 10 million records in this table.

What is the best way to run this query?

   Delete LargeTable where readTime < dateadd(MONTH,-7,GETDATE())

This question is related to sql-server sql-server-2008 sql-optimization

The answer is


If you are using SQL Server 2016 or higher, and your table is partitioned on the column you are deleting by (for example, a timestamp column), then you can use a newer form of TRUNCATE TABLE to delete the data partition by partition.

TRUNCATE TABLE table_name WITH ( PARTITIONS ( { <partition_number_expression> | <range> } [ , ...n ] ) )

This deletes the data in the selected partition(s) only. It should be the most efficient way to remove part of a table: like a regular TRUNCATE it is minimally logged (page deallocations rather than per-row deletes) and runs just as fast, but it leaves the data in the other partitions untouched.
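For example, assuming LargeTable is partitioned on readTime and the expired rows live in partitions 1 and 2 (hypothetical partition numbers; check sys.partitions for your actual layout), the command would look like this:

-- Hypothetical example: the partition numbers depend on your partition function
TRUNCATE TABLE dbo.LargeTable WITH (PARTITIONS (1, 2));

-- A contiguous range works too
TRUNCATE TABLE dbo.LargeTable WITH (PARTITIONS (1 TO 2));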

The drawback is that if your table is not set up with partitions, you have to go old school: delete the data with the regular approach and then recreate the table with partitions so that you can do this in the future, which is what I did. I added the partition creation and deletion into the insertion procedure itself. I had a table with 500 million rows, so this was the only option to reduce deletion time.
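As a rough sketch of that recreation step (every name here is an assumption, and the boundary values and column types must match your own data), a monthly partition function, scheme, and table might look like:

-- Monthly partition function; each boundary starts a new partition (RANGE RIGHT)
CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('2018-09-01', '2018-10-01', '2018-11-01');

-- Map every partition to the PRIMARY filegroup for simplicity
CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.LargeTable
(
    id       INT IDENTITY NOT NULL,
    readTime DATETIME NOT NULL,
    payload  VARCHAR(100) NULL,
    -- the partitioning column must be part of the clustered key
    CONSTRAINT PK_LargeTable PRIMARY KEY CLUSTERED (id, readTime)
) ON psMonthly (readTime);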

For more details, refer to the links below: https://docs.microsoft.com/en-us/sql/t-sql/statements/truncate-table-transact-sql?view=sql-server-2017

SQL server 2016 Truncate table with partitions

Below is what I did first to delete the data before I could recreate the table with partitions containing the required data. This query runs during the specified time window each day, for as many days as it takes, until the data is deleted.

:connect <<ServerName>>    -- requires SQLCMD mode in SSMS
use <<DatabaseName>>

SET NOCOUNT ON;
DECLARE @Deleted_Rows INT;
DECLARE @loopnum INT;
DECLARE @msg varchar(100);
DECLARE @FlagDate datetime;
SET @FlagDate =  getdate() - 31;   -- purge rows older than 31 days
SET @Deleted_Rows = 1;
SET @loopnum = 1;

/*while (getdate() < convert(datetime,'2018-11-08 14:00:00.000',120))
BEGIN
    RAISERROR( 'WAIT for START' ,0,1) WITH NOWAIT   
    WAITFOR DELAY '00:10:00'
END*/
RAISERROR( 'STARTING PURGE' ,0,1) WITH NOWAIT   

WHILE (1=1)   -- outer loop: keeps running until all qualifying rows are gone
BEGIN
    -- inner loop: delete in batches, but only between 12:00 and 20:59
    WHILE (@Deleted_Rows > 0 AND (datepart(hh, getdate() ) >= 12 AND datepart(hh, getdate() ) <= 20)) -- (getdate() < convert(datetime,'2018-11-08 19:00:00.000',120) )
      BEGIN
       -- Delete some small number of rows at a time
         DELETE TOP (500000)  dbo.<<table_name>>
         WHERE timestamp_column < @FlagDate   -- @FlagDate is already datetime; no conversion needed
         SET @Deleted_Rows = @@ROWCOUNT;
         WAITFOR DELAY '00:00:01'             -- brief pause between batches
         select @msg = 'ROWCOUNT ' + convert(varchar(20), @Deleted_Rows);
         set @loopnum = @loopnum + 1
         if @loopnum > 1000   -- every 1000 batches, shrink the log and report progress
             begin 
                 begin try
                        DBCC SHRINKFILE (N'<<databasename>>_log' , 0, TRUNCATEONLY)
                        RAISERROR( @msg ,0,1) WITH NOWAIT
                 end try
                 begin catch
                     RAISERROR( 'DBCC SHRINKFILE FAILED' ,0,1) WITH NOWAIT  
                 end catch
                 set @loopnum = 1
             end
        END
    IF @Deleted_Rows = 0 BREAK      -- all qualifying rows deleted; exit the outer loop
    WAITFOR DELAY '00:10:00'        -- outside the purge window: wait, then check again
END
select getdate()

@Francisco Goldenstein, just a minor correction: the COMMIT must come after you set the variable, otherwise the WHILE will execute just once:

DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;

WHILE (@Deleted_Rows > 0)
BEGIN
    BEGIN TRANSACTION

    -- Delete some small number of rows at a time
    DELETE TOP (10000)  LargeTable 
    WHERE readTime < dateadd(MONTH,-7,GETDATE())

    SET @Deleted_Rows = @@ROWCOUNT;

    COMMIT TRANSACTION
    CHECKPOINT -- for simple recovery model

END

You can also use GO followed by the number of times you want to execute the same batch:

DELETE TOP (10000)  [TARGETDATABASE].[SCHEMA].[TARGETTABLE] 
WHERE readTime < dateadd(MONTH,-1,GETDATE());
-- repeat the batch this many times (GO <n> works in SSMS and sqlcmd; it is not T-SQL)
GO 100

This variation of M.Ali's answer works fine for me. It deletes a batch, clears the log, and repeats. I watched the log grow, drop, and start over.

DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;
WHILE (@Deleted_Rows > 0)
  BEGIN
   -- Delete some small number of rows at a time
    delete top (100000) from InstallLog where [DateTime] between '2014-12-01' and '2015-02-01'   -- [DateTime] bracketed because it is also a type name
    SET @Deleted_Rows = @@ROWCOUNT;
    dbcc shrinkfile (MobiControlDB_log,0,truncateonly);
END

Another approach:

-- Note: SET ROWCOUNT with DELETE is deprecated; prefer DELETE TOP (n) on newer versions
SET ROWCOUNT 1000 -- limit each DELETE to 1000 rows

DECLARE @DATE AS DATETIME = dateadd(MONTH,-7,GETDATE())

DELETE LargeTable  WHERE readTime < @DATE
WHILE @@ROWCOUNT > 0
BEGIN
   DELETE LargeTable  WHERE readTime < @DATE
END
SET ROWCOUNT 0 -- restore the default (no limit)

Optional:

You cannot truly disable the transaction log in SQL Server, but switching the database to the SIMPLE recovery model keeps the log from growing between checkpoints:

ALTER DATABASE dbname SET RECOVERY SIMPLE;
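If the database normally runs under the FULL recovery model, a common follow-up (a sketch; the database name and backup path are assumptions) is to switch back afterwards and take a full backup, since switching to SIMPLE breaks the log backup chain:

ALTER DATABASE dbname SET RECOVERY SIMPLE;
-- ... run the chunked delete here ...
ALTER DATABASE dbname SET RECOVERY FULL;
-- restart the backup chain so log backups work again
BACKUP DATABASE dbname TO DISK = N'D:\Backups\dbname.bak';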

@M.Ali's answer is right, but also keep in mind that the log can grow a lot if you don't commit the transaction after each chunk and perform a CHECKPOINT. This is how I would do it, taking this article http://sqlperformance.com/2013/03/io-subsystem/chunk-deletes (which includes performance tests and graphs) as a reference:

DECLARE @Deleted_Rows INT;
SET @Deleted_Rows = 1;


WHILE (@Deleted_Rows > 0)
  BEGIN

   BEGIN TRANSACTION

   -- Delete some small number of rows at a time
     DELETE TOP (10000)  LargeTable 
     WHERE readTime < dateadd(MONTH,-7,GETDATE())

     SET @Deleted_Rows = @@ROWCOUNT;

   COMMIT TRANSACTION
   CHECKPOINT -- for simple recovery model
END

If you want to avoid a loop, you can use a GOTO statement to delete a large amount of records in SQL Server. For example:

 IsRepeat:
    DELETE TOP (10000)
    FROM <TableName>
    WHERE <your filter here>   -- without a filter this will eventually empty the table
    IF @@ROWCOUNT > 0
         GOTO IsRepeat

This way you can delete a large amount of data with smaller-sized deletes.

Let me know if you require more information.


If you are willing (and able) to implement partitioning, that is an effective technique for removing large quantities of data with little run-time overhead. Not cost-effective for a once-off exercise, though.
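For example (a sketch only; the names, the partition number, and the requirement that the staging table match the source's schema and filegroup are assumptions), on versions older than SQL Server 2016 the usual pattern is to switch the old partition out to a staging table and truncate that:

-- Switch the expired partition out to an empty table with the same
-- structure on the same filegroup, then truncate it; both are metadata
-- operations, so almost nothing is written to the log
ALTER TABLE dbo.LargeTable SWITCH PARTITION 1 TO dbo.LargeTable_Staging;
TRUNCATE TABLE dbo.LargeTable_Staging;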


Shorter syntax

select 1 -- seed @@ROWCOUNT so the loop body runs at least once
WHILE (@@ROWCOUNT > 0)
BEGIN
  DELETE TOP (10000) LargeTable 
  WHERE readTime < dateadd(MONTH,-7,GETDATE())
END

I was able to delete 19 million rows from my table of 21 million rows in a matter of minutes. Here is my approach.

If you have an auto-incrementing primary key on this table, then you can make use of it; a T-SQL sketch of the whole procedure follows the list.

  1. Get the minimum primary key value among the rows you want to keep, i.e. where readTime >= dateadd(MONTH,-7,GETDATE()); this assumes the key grows with readTime. (Add an index on readTime if not already present; it will be dropped along with the table in step 3 anyway.) Let's store it in a variable 'min_primary'.

  2. Insert all the rows having primary key >= min_primary into a staging table (an in-memory table if the number of rows is not large).

  3. Drop the large table.

  4. Recreate the table. Copy all the rows from staging table to main table.

  5. Drop the staging table.
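Here is a rough sketch of those steps (table and column names like LargeTable, id, and readTime are assumptions carried over from the question, and the key is assumed to grow with readTime):

DECLARE @min_primary INT;

-- 1. Smallest key among the rows to keep
SELECT @min_primary = MIN(id)
FROM LargeTable
WHERE readTime >= DATEADD(MONTH, -7, GETDATE());

-- 2. Copy the rows to keep into a staging table
SELECT *
INTO LargeTable_Staging
FROM LargeTable
WHERE id >= @min_primary;

-- 3. Drop the large table (its indexes go with it)
DROP TABLE LargeTable;

-- 4. Recreate the table (run your original CREATE TABLE script),
--    then copy the kept rows back; if id is an IDENTITY column you
--    will also need SET IDENTITY_INSERT and an explicit column list
INSERT INTO LargeTable SELECT * FROM LargeTable_Staging;

-- 5. Drop the staging table
DROP TABLE LargeTable_Staging;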


You can delete small batches using a while loop, something like this:

DELETE TOP (10000)  LargeTable 
WHERE readTime < dateadd(MONTH,-7,GETDATE())
WHILE @@ROWCOUNT > 0
BEGIN
    DELETE TOP (10000)  LargeTable 
    WHERE readTime < dateadd(MONTH,-7,GETDATE())
END