[line-breaks] Difference between CR LF, LF and CR line break types?

I'd like to know the difference (with examples if possible) between CR LF (Windows), LF (Unix) and CR (Macintosh) line break types.

CR - ASCII code 13

LF - ASCII code 10

Theoretically, CR returns the cursor to the first position of the line (on the left), and LF feeds one line, moving the cursor one line down. This is how you controlled printers and text-mode monitors in the old days. These characters are usually used to mark the ends of lines in text files. Different operating systems used different conventions: as you pointed out, Windows uses the CR/LF combination, while pre-OSX Macs used just CR, and so on.
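In most languages the escape sequences \r and \n map directly to those two codes. A minimal sketch (Python, purely for illustration):

    print(ord("\r"), ord("\n"))          # 13 10
    print("one\r\ntwo".encode("ascii"))  # b'one\r\ntwo' -- CR+LF on the wire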


Since there's no answer stating just this, summarized succinctly:

Carriage Return (Mac, pre-OSX)

  • CR
  • \r
  • ASCII code 13

Line Feed (Linux, Mac OSX)

  • LF
  • \n
  • ASCII code 10

Carriage Return and Line Feed (Windows)

  • CRLF
  • \r\n
  • ASCII code 13 and then ASCII code 10
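To make the three conventions concrete, here is a minimal sketch (Python used just for illustration) of what the same two-line file looks like as raw bytes under each convention:

    for name, eol in [("CR", "\r"), ("LF", "\n"), ("CRLF", "\r\n")]:
        data = ("hello" + eol + "world" + eol).encode("ascii")
        print(f"{name:5} {data}")
    # CR    b'hello\rworld\r'
    # LF    b'hello\nworld\n'
    # CRLF  b'hello\r\nworld\r\n'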

If you see these ASCII codes written in a strange format, they are merely the numbers 13 and 10 in a different radix/base, usually base 8 (octal) or base 16 (hexadecimal).
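A quick way to see those alternate representations (again a small Python sketch):

    for code in (13, 10):
        print(code, oct(code), hex(code))
    # 13 0o15 0xd
    # 10 0o12 0xa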

http://www.bluesock.org/~willg/dev/ascii.html


CR and LF are control characters, respectively coded 0x0D (13 decimal) and 0x0A (10 decimal).

They are used to mark a line break in a text file. As you indicated, Windows uses the two-character CR LF sequence; Unix uses only LF, and the old Mac OS (pre-OSX Macintosh) used CR.

An apocryphal historical perspective:

As indicated by Peter, CR = Carriage Return and LF = Line Feed, two expressions that have their roots in old typewriters/TTYs. LF moved the paper up (but kept the horizontal position identical) and CR brought back the "carriage" so that the next character typed would be at the leftmost position on the paper (but on the same line). CR+LF did both, i.e. prepared to type a new line. As time went by, the physical semantics of the codes no longer applied, and since memory and floppy disk space were at a premium, some OS designers decided to use only one of the characters; they just didn't communicate very well with one another ;-)

Most modern text editors and text-oriented applications offer options/settings that allow automatic detection of a file's end-of-line convention and display it accordingly.
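The detection itself can be as simple as counting which terminator dominates. A rough sketch of the idea (the function name detect_eol is mine, not any standard API, and ties are broken arbitrarily):

    def detect_eol(path):
        """Guess a file's line-ending convention by counting terminators."""
        data = open(path, "rb").read()
        crlf = data.count(b"\r\n")
        lf = data.count(b"\n") - crlf   # bare LFs not part of a CRLF
        cr = data.count(b"\r") - crlf   # bare CRs not part of a CRLF
        return max((crlf, "CRLF"), (lf, "LF"), (cr, "CR"))[1]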


This is a good summary I found:

The Carriage Return (CR) character (0x0D, \r) moves the cursor to the beginning of the line without advancing to the next line. This character is used as a new line character in Commodore and early Macintosh operating systems (Mac OS 9 and earlier).

The Line Feed (LF) character (0x0A, \n) moves the cursor down to the next line without returning to the beginning of the line. This character is used as a new line character in UNIX-based systems (Linux, Mac OS X, etc.).

The End of Line (EOL) sequence (0x0D 0x0A, \r\n) is actually two ASCII characters, a combination of the CR and LF characters. It moves the cursor both down to the next line and to the beginning of that line. This sequence is used as a new line marker in most other non-Unix operating systems, including Microsoft Windows, Symbian OS, and others.

Source
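You can still observe the cursor semantics described above on any modern terminal. This little sketch (Python, purely illustrative) uses a bare CR to overwrite the same line repeatedly, which is how single-line progress indicators work:

    import sys
    import time

    for i in range(1, 6):
        # \r returns the cursor to column one without advancing,
        # so each write overwrites the previous one
        sys.stdout.write(f"\rstep {i}/5")
        sys.stdout.flush()
        time.sleep(0.2)
    sys.stdout.write("\n")  # a final \n advances to the next line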


Systems based on ASCII or a compatible character set use either LF (Line Feed, 0x0A, 10 in decimal) or CR (Carriage Return, 0x0D, 13 in decimal) individually, or CR followed by LF (CR+LF, 0x0D 0x0A). These characters are based on printer commands: the line feed indicated that one line of paper should feed out of the printer, and a carriage return indicated that the printer carriage should return to the beginning of the current line.

Here are the details.
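In practice most modern runtimes normalize these for you when reading text. For example, Python's "universal newlines" mode translates all three conventions to \n (a minimal sketch):

    import io

    raw = b"one\r\ntwo\rthree\n"  # all three conventions mixed together
    text = io.TextIOWrapper(io.BytesIO(raw), encoding="ascii", newline=None).read()
    print(repr(text))             # 'one\ntwo\nthree\n'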


Jeff Atwood has a recent blog post about this: The Great Newline Schism

Here is the essence from Wikipedia:

The sequence CR+LF was in common use on many early computer systems that had adopted teletype machines, typically an ASR33, as a console device, because this sequence was required to position those printers at the start of a new line. On these systems, text was often routinely composed to be compatible with these printers, since the concept of device drivers hiding such hardware details from the application was not yet well developed; applications had to talk directly to the teletype machine and follow its conventions. The separation of the two functions concealed the fact that the print head could not return from the far right to the beginning of the next line in one-character time. That is why the sequence was always sent with the CR first. In fact, it was often necessary to send extra characters (extraneous CRs or NULs, which are ignored) to give the print head time to move to the left margin. Even after teletypes were replaced by computer terminals with higher baud rates, many operating systems still supported automatic sending of these fill characters, for compatibility with cheaper terminals that required multiple character times to scroll the display.


NL derives from the EBCDIC NL = X'15', which would logically compare to CRLF (X'0D0A' in ASCII)... this becomes evident when physically moving data from mainframes to midrange systems. Colloquially (as only arcane folks use EBCDIC), NL has been equated with either CR or LF or CRLF.
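To see the mismatch concretely, here is a small sketch assuming Python's bundled cp037 EBCDIC codec, where byte X'15' decodes to the Unicode NEL character (U+0085) rather than to CR, LF, or CRLF:

    nl = b"\x15".decode("cp037")       # EBCDIC NL
    print(hex(ord(nl)))                # 0x85 -- Unicode NEL, not CR or LF
    # Moving mainframe text into an ASCII world therefore requires an
    # explicit substitution; which ending to substitute is a choice:
    print(nl.replace("\u0085", "\r\n").encode("ascii"))  # b'\r\n'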


The sad state of "record separators" or "line terminators" is a legacy of the dark ages of computing.

Now, we take it for granted that anything we want to represent is in some way structured data and conforms to various abstractions that define lines, files, protocols, messages, markup, whatever.

But once upon a time this wasn't exactly true. Applications built control characters and device-specific processing directly into their data. The brain-dead systems that required both CR and LF simply had no abstraction for record separators or line terminators. The CR was necessary to get the teletype or video display to return to column one, and the LF (today NL, same code) was necessary to get it to advance to the next line. I guess the idea of doing something other than dumping the raw data to the device was too complex.

Unix and Mac actually specified an abstraction for the line end, imagine that. Sadly, they specified different ones. (Unix, ahem, came first.) And naturally, they used a control code that was already "close" to S.O.P.

Since almost all of our operating software today is a descendant of Unix, Mac, or MS operating software, we are stuck with the line-ending confusion.