[sql] Errors in SQL Server while importing CSV file despite varchar(MAX) being used for each column

I'm trying to insert a large CSV file (several gigs) into SQL Server, but once I go through the Import Wizard and finally try to import the file I get the following error report:

  • Executing (Error) Messages

Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "Title" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".

(SQL Server Import and Export Wizard)

Error 0xc020902a: Data Flow Task 1: The "Source - Train_csv.Outputs[Flat File Source Output].Columns["Title"]" failed because truncation occurred, and the truncation row disposition on "Source - Train_csv.Outputs[Flat File Source Output].Columns["Title"]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.

(SQL Server Import and Export Wizard)

Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "C:\Train.csv" on data row 2.

(SQL Server Import and Export Wizard)

Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on Source - Train_csv returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.

(SQL Server Import and Export Wizard)

I created the table to insert the file into first, and I set each column to hold varchar(MAX), so I don't understand how I can still have this truncation issue. What am I doing wrong?
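For illustration, the table looked roughly like this (the table name and every column except Title are placeholders, not the actual schema):

CREATE TABLE dbo.Train (
    Id    VARCHAR(MAX),   -- hypothetical column
    Title VARCHAR(MAX),   -- the column named in the error messages
    Body  VARCHAR(MAX)    -- hypothetical column
);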

This question is related to: sql, sql-server, csv

The answers are:


In the SQL Server Import and Export Wizard you can adjust the source data types on the Advanced tab (these become the data types of the output if you are creating a new table; otherwise they are just used for handling the source data).

The data types are annoyingly different from those in MS SQL: instead of VARCHAR(255) it's DT_STR with the output column width set to 255, and for VARCHAR(MAX) it's DT_TEXT.

So, on the Data Source selection, in the Advanced tab, change the data type of any offending columns from DT_STR to DT_TEXT (you can select multiple columns and change them all at once).

Import and Export Wizard - Data Source - Advanced
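If the wizard keeps tripping over the SSIS types, one alternative (not part of the wizard workflow above) is to load the file with plain T-SQL once the VARCHAR(MAX) table exists. A rough sketch, assuming a dbo.Train table, a header row, and a simple comma-delimited file; note that BULK INSERT only handles quoted CSV fields natively from SQL Server 2017 onwards (WITH FORMAT = 'CSV'):

BULK INSERT dbo.Train
FROM 'C:\Train.csv'
WITH (
    FIRSTROW = 2,            -- skip the header row
    FIELDTERMINATOR = ',',   -- assumes no quoted commas inside the data
    ROWTERMINATOR = '\n',
    TABLOCK
);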


Go to the Advanced tab, open the data type of the column, and change it from DT_STR (column width 255) to DT_TEXT. The import should then work perfectly.


The Advanced Editor did not resolve my issue; instead I had to edit the .dtsx file in Notepad (or your favorite text/XML editor) and manually replace the attribute values with

length="0" dataType="nText" (I'm using Unicode)

Always make a backup of the .dtsx file before you edit it in text/XML mode.

Running SQL Server 2008 R2


This answer may not apply universally, but it fixed the occurrence of this error that I was encountering when importing a small text file. The flat-file provider was importing based on fixed 50-character text columns in the source, which was incorrect; no amount of remapping the destination columns affected the issue.

To solve the issue, in the "Choose a Data Source" step for the flat-file provider, after selecting the file, a "Suggest Types..." button appears beneath the input column list. After clicking this button, even if no changes were made in the ensuing dialog, the flat-file provider re-queried the source .csv file and correctly determined the lengths of the fields in the source file.

Once this was done, the import proceeded with no further issues.


Issue: The Jet OLE DB provider reads a registry key to determine how many rows to scan when guessing the type of each source column. By default the value of this key is 8, so the provider scans the first 8 rows of the source data to determine the data types. If a field looks like text and its data is longer than 255 characters, the column is typed as a memo field; if no value in the first 8 rows is longer than 255 characters, Jet cannot accurately determine the nature of the data type. Because every value in the first 8 rows of the exported sheet was shorter than 255 characters, it treated the source column as VARCHAR(255) and was unable to read data from rows where the column was longer.

Fix: The solution is to sort the comment column in descending order before the import, so that the longest values land in the first 8 rows that Jet samples (see the sketch below). From SQL Server 2012 onwards you can instead update the values on the Advanced tab of the Import Wizard.
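As a sketch of that sort, assuming the data comes from a table with an Id and a Comment column (both names are illustrative), ordering by length pushes the longest values into the 8 rows that Jet samples:

SELECT Id, Comment
FROM dbo.SourceTable          -- illustrative table and column names
ORDER BY LEN(Comment) DESC;   -- longest values first, inside Jet's 8-row sample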


I think it's a bug; please apply the workaround and then try again: http://support.microsoft.com/kb/281517.

Also, go into the Advanced tab and confirm that the target column's type is VARCHAR(MAX).
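If you want to verify the destination table itself from T-SQL rather than the wizard UI, a quick check (the table name is illustrative) is to query the column metadata; a CHARACTER_MAXIMUM_LENGTH of -1 means VARCHAR(MAX):

SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Train';   -- -1 in the length column means MAX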

