[sql-server-2008] "Primary Filegroup is Full" in SQL Server 2008 Standard for no apparent reason

Our database is currently at 64 GB and one of our apps started to fail with the following error:

System.Data.SqlClient.SqlException: Could not allocate space for object 'cnv.LoggedUnpreparedSpos'.'PK_LoggedUnpreparedSpos' in database 'travelgateway' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

I double-checked everything: all files in the single filegroup are allowed to autogrow with reasonable increments (100 MB for the data file, 10% for the log file), more than 100 GB of free space is available for the database, and tempdb is set to autogrow as well, with plenty of free HDD space on its drive.
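For reference, the per-file autogrow and max-size settings can be checked with something like the following (a minimal sketch; the size columns are stored in 8 KB pages, so dividing by 128 gives MB):

USE travelgateway;

SELECT name,
       type_desc,
       size / 128 AS SizeMB,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
       END AS GrowthSetting,
       CASE WHEN max_size = -1
            THEN 'unlimited'
            ELSE CAST(max_size / 128 AS varchar(20)) + ' MB'
       END AS MaxSizeSetting
FROM sys.database_files;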

To resolve the problem, I added a second file to the filegroup and the error went away. But I feel uneasy about this whole situation.

Where's the problem here, guys?

This question is related to sql-server-2008 filegroup

The answer is


Anton,

As a best practice, one shouldn't create user objects in the primary filegroup. When you have the bandwidth, create a new filegroup, move the user objects there, and leave the system objects in PRIMARY.
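If you get around to doing that, the move is roughly the following (the filegroup name, file path, table and index names below are placeholders, not anything from the question):

-- Add a secondary filegroup with one data file.
ALTER DATABASE travelgateway ADD FILEGROUP UserData;

ALTER DATABASE travelgateway
ADD FILE
(
    NAME = N'travelgateway_userdata1',
    FILENAME = 'D:\SQLData\travelgateway_userdata1.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 512MB
)
TO FILEGROUP UserData;

-- Rebuilding an existing clustered index with DROP_EXISTING = ON relocates
-- the table's data pages to the new filegroup (assumes dbo.SomeUserTable
-- already has a clustered index with this name; nonclustered indexes are
-- moved the same way).
CREATE UNIQUE CLUSTERED INDEX IX_SomeUserTable_SomeKey
ON dbo.SomeUserTable (SomeKey)
WITH (DROP_EXISTING = ON)
ON UserData;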

The following queries will help you identify the space used in each file, the top tables with the highest row counts, and whether there are any heaps. It's a good starting point for investigating this issue.

-- Space used and space available in each file, per filegroup
SELECT ds.name AS FilegroupName,
       df.name AS FileName,
       physical_name AS PhysicalName,
       size / 128 AS TotalSizeInMB,
       size / 128.0 - CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int) / 128.0 AS AvailableSpaceInMB,
       CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int) / 128.0 AS ActualSpaceUsedInMB,
       (CAST(FILEPROPERTY(df.name, 'SpaceUsed') AS int) / 128.0) / (size / 128.0) * 100.0 AS [%SpaceUsed]
FROM sys.database_files df
LEFT OUTER JOIN sys.data_spaces ds
    ON df.data_space_id = ds.data_space_id;

-- Free disk space on each fixed drive of the server
EXEC xp_fixeddrives;

-- Row counts per user table/index stored in the PRIMARY filegroup
select t.name as TableName,
       i.name as IndexName,
       p.rows as Rows
from sys.filegroups fg (nolock)
     join sys.database_files df (nolock) on fg.data_space_id = df.data_space_id
     join sys.indexes i (nolock)         on df.data_space_id = i.data_space_id
     join sys.tables t (nolock)          on i.object_id = t.object_id
     join sys.partitions p (nolock)      on t.object_id = p.object_id and i.index_id = p.index_id
where fg.name = 'PRIMARY' and t.type = 'U'
order by p.rows desc;

-- Same query restricted to heaps (index_id = 0) in the PRIMARY filegroup
select t.name as TableName,
       i.name as IndexName,
       p.rows as Rows
from sys.filegroups fg (nolock)
     join sys.database_files df (nolock) on fg.data_space_id = df.data_space_id
     join sys.indexes i (nolock)         on df.data_space_id = i.data_space_id
     join sys.tables t (nolock)          on i.object_id = t.object_id
     join sys.partitions p (nolock)      on t.object_id = p.object_id and i.index_id = p.index_id
where fg.name = 'PRIMARY' and t.type = 'U' and i.index_id = 0
order by p.rows desc;

I found that this happens because of the behaviour described here: http://support.microsoft.com/kb/913399

SQL Server only releases all the pages that a heap table uses when the following conditions are true:

  • A deletion on this table occurs.
  • A table-level lock is being held.

Note: A heap table is any table that is not associated with a clustered index.

If pages are not deallocated, other objects in the database cannot reuse the pages.

However, when you enable a row versioning-based isolation level in a SQL Server 2005 database, pages cannot be released even if a table-level lock is being held.
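A quick way to see whether a heap in your own database is sitting on empty pages (this check is not part of the KB article, just a sketch; the table name is a placeholder) is to compare row counts against reserved pages:

-- Substitute any suspect table for the placeholder name.
EXEC sp_spaceused 'dbo.SomeHeapTable';

-- Heaps (index_id = 0) with few rows but many reserved pages are showing
-- the behaviour described in KB 913399.
SELECT OBJECT_NAME(p.object_id) AS TableName,
       p.rows AS Rows,
       SUM(au.total_pages) AS TotalPages,
       SUM(au.used_pages)  AS UsedPages
FROM sys.partitions p
JOIN sys.allocation_units au
    ON au.container_id = p.hobt_id
WHERE p.index_id = 0
GROUP BY p.object_id, p.rows
ORDER BY SUM(au.total_pages) DESC;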

Microsoft's solution: http://support.microsoft.com/kb/913399

To work around this problem, use one of the following methods:

  • Include a TABLOCK hint in the DELETE statement if a row versioning-based isolation level is not enabled. For example, use a statement that is similar to the following:

DELETE FROM TableName WITH (TABLOCK)

Note: TableName represents the name of the table.

  • Use the TRUNCATE TABLE statement if you want to delete all the records in the table. For example, use a statement that is similar to the following:

TRUNCATE TABLE TableName

  • Create a clustered index on a column of the table. For more information about how to create a clustered index on a table, see the "Creating a Clustered Index" topic in SQL Server Books Online.
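A minimal sketch of that last option (the table and column names here are placeholders, not from the question):

-- Once the table has a clustered index it is no longer a heap,
-- so deletes release their pages without needing a TABLOCK hint.
CREATE CLUSTERED INDEX IX_SomeHeapTable_SomeKey
ON dbo.SomeHeapTable (SomeKey);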

You'll notice at the bottom of the linked article that SQL Server 2008 is not listed in the "Applies to" section, but I think it applies there as well.


In my experience, this message occurs when the primary file (.mdf) has no space left to store the database's metadata. The system tables live in this file, and they can only store their data there.

Make some space in that file and the commands will work again. That's all, enjoy.
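If nothing inside the file can be freed, growing the primary data file manually does the same job (the logical file name below is a placeholder; look up the real one in sys.database_files first):

-- The new SIZE must be larger than the file's current size.
ALTER DATABASE travelgateway
MODIFY FILE (NAME = N'travelgateway_data', SIZE = 70000MB);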


I just ran into the same problem. The reason was that the virtual memory file, pagefile.sys, was located on the same drive as our database data files (the D: drive). It had doubled in size and filled the disk, but Windows wasn't picking it up, i.e. it looked like we had 80 GB free when we actually didn't.

Restarting SQL Server didn't help; perhaps defragmenting would have given the OS time to free up the pagefile, but we just rebooted the server and voilà, the pagefile had shrunk and everything worked fine.

What is interesting is that during the 30 minutes we spent investigating, Windows didn't count the size of pagefile.sys at all (80 GB). After the restart, Windows did find the pagefile and included its size in the total disk usage (now 40 GB, which is still too big).


Please check the file growth setting of the database; if it is restricted, make it unrestricted.
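In T-SQL that corresponds to something like this (the logical file name is a placeholder; the 100 MB growth increment just mirrors the question):

ALTER DATABASE travelgateway
MODIFY FILE (NAME = N'travelgateway_data', MAXSIZE = UNLIMITED, FILEGROWTH = 100MB);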


Our problem was that the hard drive was down to zero space available.


I ran into the same problem, and at first defragmenting seemed to work, but only for a short while. It turned out the server the customer was using was running the Express edition, which has a licensing limit of about 10 GB per database.

So even though the file size was set to "unlimited", it wasn't.
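A quick way to confirm which edition is running and how close each database's data files are to the cap (Express limits the data files, not the log: 4 GB in 2008 and 10 GB in 2008 R2 and later):

SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;

-- Data file size per database
SELECT DB_NAME(database_id) AS DatabaseName,
       SUM(size) / 128      AS DataSizeMB
FROM sys.master_files
WHERE type = 0               -- 0 = data files (ROWS)
GROUP BY database_id;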


I also ran into the same problem, with the initial database size set to 4 GB and autogrowth set to 1 MB. The virtual encrypted TrueCrypt drive that the database was on seemed to have plenty of space.

I changed a couple of (the above) things:

  • I turned the Windows service for SQL Server Express from automatic to manual, so only the "regular" SQL Server is running. (Even though I am running SQL Server 2008 R2, which should allow 10 GB.)
  • I changed the autogrowth from 1 MB to 10%
  • I changed the autogrowth increment from 10% to 1000 MB
  • I defragmented the drive
  • I shrank the database:
    • manually: DBCC SHRINKDATABASE('...')
    • automatically: right-click the database | "Properties" | "Auto Shrink" | "Truncate log on checkpoint"

All to little avail (I could insert some more records, but soon ran into the same problem). The pagefile mentioned by Tobbi made me try a larger virtual drive. (Even though my drive should not contain any such system files, since I run without it mounted a lot of the time.)

  • I made a new larger virtual drive with TrueCrypt

When creating it, TrueCrypt asked whether I was going to store files larger than 4 GB (as shown in this SuperUser question).

  • I told TrueCrypt I would store files larger than 4 GB

After these last two steps I was doing fine, and I am assuming the last one did the trick. I think TrueCrypt otherwise chooses a FAT file system (as described here), which limits every file to 4 GB. (So I probably did not need to enlarge the drive after all, but I did anyway.)

This is probably a very rare border case, but maybe it is of help to somebody.


Do one thing: go to the database properties, select Files, increase the initial size of the database, and set the PRIMARY filegroup file to autogrow. Then restart SQL Server.

You will be able to use the database as before.