I hesitate to add yet another answer here as there are already quite a few, but a few points need to be made that have either not been made or not been made clearly.
First: Do not always use NVARCHAR. That is a very dangerous, and often costly, attitude / approach. And it is no better to say "Never use cursors" since they are sometimes the most efficient means of solving a particular problem, and the common work-around of doing a WHILE loop will almost always be slower than a properly done Cursor.
The only time you should use the term "always" is when advising to "always do what is best for the situation". Granted that is often difficult to determine, especially when trying to balance short-term gains in development time (manager: "we need this feature -- that you didn't know about until just now -- a week ago!") with long-term maintenance costs (manager who initially pressured team to complete a 3-month project in a 3-week sprint: "why are we having these performance problems? How could we have possibly done X which has no flexibility? We can't afford a sprint or two to fix this. What can we get done in a week so we can get back to our priority items? And we definitely need to spend more time in design so this doesn't keep happening!").
Second: @gbn's answer touches on some very important points to consider when making certain data modeling decisions when the path isn't 100% clear. But there is even more to consider:
Wasting space has a huge cascade effect on the entire system. I wrote an article going into explicit detail on this topic: Disk Is Cheap! ORLY? (free registration required; sorry I don't control that policy).
Third: While some answers are incorrectly focusing on the "this is a small app" aspect, and some are correctly suggesting to "use what is appropriate", none of the answers have provided real guidance to the O.P. An important detail mentioned in the Question is that this is a web page for their school. Great! So we can suggest that:
- Names (of students, faculty, parents, etc.): NVARCHAR, since, over time, it is only getting more likely that names from other cultures will be showing up in those places.
- Text fields that will only ever contain characters from a single language / culture can be VARCHAR with the appropriate Code Page (which is determined from the Collation of the field).
- Country / state codes: no need for INT / TINYINT since ISO codes are fixed length, human readable, and, well, standard :) Use CHAR(2) for two-letter codes and CHAR(3) if using 3-letter codes. And consider using a binary Collation such as Latin1_General_100_BIN2.
- Zip / postal codes: VARCHAR, since it is an international standard to never use any letter outside of A-Z. And yes, still use VARCHAR even if only storing US zip codes and not INT, since zip codes are not numbers, they are strings, and some of them have a leading "0". And consider using a binary Collation such as Latin1_General_100_BIN2.
- Email addresses and URLs: NVARCHAR, since both of those can now contain Unicode characters.
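To make those suggestions concrete, here is a minimal sketch of how such a table might be declared. The table name, column names, and sizes are placeholders for illustration only, not from the Question:

CREATE TABLE dbo.Student   -- hypothetical table / column names
(
    StudentID    INT IDENTITY(1, 1) NOT NULL,
    FirstName    NVARCHAR(50) NOT NULL,    -- names: NVARCHAR, for names from other cultures
    LastName     NVARCHAR(50) NOT NULL,
    CountryCode  CHAR(2) COLLATE Latin1_General_100_BIN2 NOT NULL,  -- ISO 3166-1 alpha-2 code
    PostalCode   VARCHAR(10) COLLATE Latin1_General_100_BIN2 NULL,  -- a string, not a number
    Email        NVARCHAR(254) NULL,       -- can contain Unicode characters
    HomePageURL  NVARCHAR(2048) NULL       -- can contain Unicode characters
);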
Fourth: Now that you have NVARCHAR data taking up twice as much space as it needs to for data that fits nicely into VARCHAR ("fits nicely" = doesn't turn into "?"), and somehow, as if by magic, the application did grow and now there are millions of records in at least one of these fields where most rows are standard ASCII but some contain Unicode characters so you have to keep NVARCHAR, consider the following:
If you are using SQL Server 2008 - 2016 RTM and are on Enterprise Edition, OR if using SQL Server 2016 SP1 (which made Data Compression available in all editions) or newer, then you can enable Data Compression. Data Compression can (but won't "always") compress Unicode data in NCHAR and NVARCHAR fields. The determining factors are:

- NCHAR(1 - 4000) and NVARCHAR(1 - 4000) use the Standard Compression Scheme for Unicode, but only starting in SQL Server 2008 R2, AND only for IN ROW data, not OVERFLOW! This appears to be better than the regular ROW / PAGE compression algorithm.
- NVARCHAR(MAX) and XML (and I guess also VARBINARY(MAX), TEXT, and NTEXT) data that is IN ROW (not off row in LOB or OVERFLOW pages) can at least be PAGE compressed, but not ROW compressed. Of course, PAGE compression depends on size of the in-row value: I tested with VARCHAR(MAX) and saw that 6000 character/byte rows would not compress, but 4000 character/byte rows did.
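For example, a rough sketch of enabling it (dbo.TableName is a placeholder): you can first estimate the savings with sp_estimate_data_compression_savings and then rebuild with compression:

-- estimate how much ROW compression would save (object name is a placeholder):
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'TableName',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = N'ROW';

-- then enable it on the table (heap or clustered index):
ALTER TABLE dbo.TableName REBUILD WITH (DATA_COMPRESSION = ROW);   -- or PAGE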
If using SQL Server 2005, or 2008 - 2016 RTM and not on Enterprise Edition, you can have two fields: one VARCHAR and one NVARCHAR. For example, let's say you are storing URLs which are mostly all base ASCII characters (values 0 - 127) and hence fit into VARCHAR, but sometimes have Unicode characters. Your schema can include the following 3 fields:
...
URLa VARCHAR(2048) NULL,
URLu NVARCHAR(2048) NULL,
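  -- computed column: returns whichever of the two is populated, always typed as NVARCHAR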
URL AS (ISNULL(CONVERT(NVARCHAR(2048), [URLa]), [URLu])),
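  -- ensure exactly one of URLa / URLu is populated: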
CONSTRAINT [CK_TableName_OneUrlMax] CHECK (
([URLa] IS NOT NULL OR [URLu] IS NOT NULL)
AND ([URLa] IS NULL OR [URLu] IS NULL))
);
In this model, you only SELECT from the [URL] computed column. For inserting and updating, you determine which field to use by seeing if converting alters the incoming value, which has to be of NVARCHAR type:
INSERT INTO TableName (..., URLa, URLu)
VALUES (...,
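        -- URLa gets the value only if converting to VARCHAR leaves it unchanged;
        -- otherwise URLu gets it (so exactly one of the two is non-NULL):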
IIF (CONVERT(VARCHAR(2048), @URL) = @URL, @URL, NULL),
IIF (CONVERT(VARCHAR(2048), @URL) <> @URL, @URL, NULL)
);
You can GZIP incoming values into VARBINARY(MAX) and then unzip them on the way out. If using SQL Server 2016 or newer, you can use the built-in COMPRESS and DECOMPRESS functions, which also use GZip.
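A minimal sketch of that approach (table and column names are placeholders); note that DECOMPRESS returns VARBINARY(MAX), so you have to CAST it back to the original string type:

-- store the GZipped bytes:
UPDATE dbo.TableName
SET    ValueCompressed = COMPRESS(ValueOriginal);

-- read them back:
SELECT CAST(DECOMPRESS(ValueCompressed) AS NVARCHAR(MAX)) AS ValueOriginal
FROM   dbo.TableName;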
If using SQL Server 2017 or newer, you can look into making the table a Clustered Columnstore Index.
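If you do go that route, a minimal sketch (the index and table names are placeholders):

-- convert the table's storage to a clustered columnstore index:
CREATE CLUSTERED COLUMNSTORE INDEX [CCI_TableName]
    ON dbo.TableName;

Columnstore brings its own (typically very effective) compression, which is why it is worth considering in this context.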
While this is not a viable option yet, SQL Server 2019 introduces native support for UTF-8 in VARCHAR / CHAR datatypes. There are currently too many bugs with it for it to be used, but if they are fixed, then this is an option for some scenarios. Please see my post, "Native UTF-8 Support in SQL Server 2019: Savior or False Prophet?", for a detailed analysis of this new feature.