I have lines like these, and I want to know how many lines I actually have...
09:16:39 AM all 2.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 94.00
09:16:40 AM all 5.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 91.00
09:16:41 AM all 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 96.00
09:16:42 AM all 3.00 0.00 1.00 0.00 0.00 0.00 0.00 0.00 96.00
09:16:43 AM all 0.00 0.00 1.00 0.00 1.00 0.00 0.00 0.00 98.00
09:16:44 AM all 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
09:16:45 AM all 2.00 0.00 6.00 0.00 0.00 0.00 0.00 0.00 92.00
Is there a way to count them all using linux commands?
This question is related to: linux, bash, command-line, scripting
Use wc:
wc -l <filename>
To count all lines use:
$ wc -l file
To filter and count only lines containing a pattern, use:
$ grep -w "pattern" -c file
Or use -v to invert the match:
$ grep -w "pattern" -c -v file
See the grep man page for the -e, -i and -x options.
I know this is old but still: Count filtered lines
My file looks like:
Number of files sent
Company 1 file: foo.pdf OK
Company 1 file: foo.csv OK
Company 1 file: foo.msg OK
Company 2 file: foo.pdf OK
Company 2 file: foo.csv OK
Company 2 file: foo.msg Error
Company 3 file: foo.pdf OK
Company 3 file: foo.csv OK
Company 3 file: foo.msg Error
Company 4 file: foo.pdf OK
Company 4 file: foo.csv OK
Company 4 file: foo.msg Error
If I want to know how many files were sent OK:
grep "OK" <filename> | wc -l
OR
grep -c "OK" filename
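As a quick check against the sample above (the file name report.txt is hypothetical), both forms agree on the count:

```shell
# Recreate the sample log as a temporary file (hypothetical name).
cat > report.txt <<'EOF'
Number of files sent
Company 1 file: foo.pdf OK
Company 1 file: foo.csv OK
Company 1 file: foo.msg OK
Company 2 file: foo.pdf OK
Company 2 file: foo.csv OK
Company 2 file: foo.msg Error
Company 3 file: foo.pdf OK
Company 3 file: foo.csv OK
Company 3 file: foo.msg Error
Company 4 file: foo.pdf OK
Company 4 file: foo.csv OK
Company 4 file: foo.msg Error
EOF

# Both commands count the lines containing "OK".
ok_pipe=$(grep "OK" report.txt | wc -l)
ok_flag=$(grep -c "OK" report.txt)
rm report.txt
```

Both report 9 here: grep -c counts matching lines directly, while the pipe form has grep print the matching lines and wc -l count them.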
wc -l file_name
For example: wc -l file.txt
It will give you the total number of lines in that file.
To get the last line, use tail -1 file_name.
wc -l <filename>
This will give you the number of lines and the filename in the output.
Eg.
wc -l 24-11-2019-04-33-01-url_creator.log
Output
63 24-11-2019-04-33-01-url_creator.log
Use
wc -l <filename> | cut -d' ' -f1
to get only the number of lines in the output.
Eg.
wc -l 24-11-2019-04-33-01-url_creator.log | cut -d' ' -f1
Output
63
If all you want is the number of lines (and not the number of lines and the stupid file name coming back):
wc -l < /filepath/filename.ext
As previously mentioned these also work (but are inferior for other reasons):
awk 'END{print NR}' file # not on all unixes
sed -n '$=' file # (GNU sed) also not on all unixes
grep -c ".*" file # overkill and probably also slower
Redirecting or piping the contents of the file to wc -l
should suffice, like the following:
cat /etc/fstab | wc -l
which then provides the number of lines only.
wc -l <file.txt>
Or
command | wc -l
This drop-in shell function works like a charm. Just add the following snippet to your .bashrc
file (or the equivalent for your shell environment).
# ---------------------------------------------
# Count lines in a file
#
# @1 = path to file
#
# EXAMPLE USAGE: `count_file_lines $HISTFILE`
# ---------------------------------------------
count_file_lines() {
  local subj=$(wc -l "$1")
  subj="${subj//$1/}"
  echo "${subj//[[:space:]]/}"
}
Note that local and the ${var//...} substitutions are bash/zsh features rather than strict POSIX, so this works in bash and zsh but not in a plain POSIX sh.
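Since local and ${var//...} are bash/zsh extensions, a strictly POSIX sketch (the function name is hypothetical) can avoid the string substitution entirely by reading the file on stdin:

```shell
# A strictly POSIX line counter (hypothetical name).
# Reading the file on stdin keeps the filename out of wc's output,
# so no post-processing of the filename is needed; tr strips the
# padding whitespace some wc implementations emit.
count_file_lines_posix() {
    wc -l < "$1" | tr -d '[:space:]'
}
```

Usage is the same, e.g. count_file_lines_posix "$HISTFILE".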
There are many ways; using wc
is one.
wc -l file
Others include:
awk 'END{print NR}' file
sed -n '$=' file # (GNU sed)
grep -c ".*" file
I just made a program to do this (with Node):
npm install gimme-lines
gimme-lines verbose --exclude=node_modules,public,vendor --exclude_extensions=html
To count the number of lines and store the result in a variable, use this command:
count=$(wc -l < file.txt)
echo "Number of lines: $count"
As others said wc -l
is the best solution, but for future reference you can use Perl:
perl -lne 'END { print $. }' file
$.
contains the current line number, and the END
block executes at the end of the script.
wc -l file.txt | cut -f3 -d" "
Returns only the number of lines. (The field index depends on how your wc pads its output with leading spaces; with GNU coreutils the count is the first field, so -f1 -d" " works there.)
Or count all lines in subdirectories with a file name pattern (e.g. logfiles with timestamps in the file name):
wc -l ./**/*_SuccessLog.csv
(The ** glob requires shopt -s globstar in bash; it is enabled by default in zsh.)
cat file.log | wc -l | grep -oE '[0-9]+'
grep -oE '[0-9]+'
: in order to return the digits only. (Note that \d is a PCRE escape, not plain ERE; use [0-9], or -P with GNU grep.)

I saw this question while I was looking for a way to count the lines of multiple files, so if you want to count the lines of multiple .txt files you can do this:
cat *.txt | wc -l
It will also run on one .txt file ;)
The tool wc
is the "word counter" in UNIX and UNIX-like operating systems, but you can also use it to count lines in a file by adding the -l
option.
wc -l foo
will count the number of lines in foo.
You can also pipe output from a program like this: ls -l | wc -l
, which will tell you how many files are in the current directory (plus one, because ls -l prints a "total" summary line first).
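A quick way to see the "plus one" is a scratch directory with a known number of files:

```shell
# ls -l prints a "total ..." summary line before the entries,
# so the line count is the number of files plus one.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"
count=$(ls -l "$dir" | wc -l)   # 2 files + "total" line = 3
rm -r "$dir"
```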
Use nl
like this:
nl filename
From man nl
:
Write each FILE to standard output, with line numbers added. With no FILE, or when FILE is -, read standard input.
I've been using this:
cat myfile.txt | wc -l
I prefer it over the accepted answer because it does not print the filename, and you don't have to use awk
to fix that. Accepted answer:
wc -l myfile.txt
But I think the best one is GGB667's answer:
wc -l < myfile.txt
I will probably be using that from now on. It's slightly shorter than my way. I am putting up my old way of doing it in case anyone prefers it. The output is the same with those two methods.
The above are the preferred methods, but the "cat" command can also be helpful:
cat -n <filename>
will show you the whole content of the file with line numbers.
If you want to count the total lines of all the files in a directory, you can use find and wc:
find . -type f -exec wc -l {} +
wc -l does not count lines.

Yes, this answer may be a bit late to the party, but I haven't found a more robust solution documented in the answers yet.
Contrary to popular belief, POSIX does not require files to end with a newline character at all. Yes, the definition of a POSIX 3.206 Line is as follows:
A sequence of zero or more non- <newline> characters plus a terminating <newline> character.
However, what many people are not aware of is that POSIX also defines POSIX 3.195 Incomplete Line as:
A sequence of one or more non- <newline> characters at the end of the file.
Hence, files without a trailing LF
are perfectly POSIX-compliant.
If you choose not to support both EOF types, your program is not POSIX-compliant.
As an example, let's have a look at the following file.
1 This is the first line.
2 This is the second line.
No matter the EOF, I'm sure you would agree that there are two lines. You figured that out by looking at how many lines have been started, not by looking at how many lines have been terminated. In other words, as per POSIX, these two files both have the same number of lines:
1 This is the first line.\n
2 This is the second line.\n
1 This is the first line.\n
2 This is the second line.
The man page is relatively clear about wc
counting newlines, with a newline just being a 0x0a
character:
NAME
wc - print newline, word, and byte counts for each file
Hence, wc
doesn't even attempt to count what you might call a "line". Using wc
to count lines can very well lead to miscounts, depending on the EOF of your input file.
You can use grep
to count lines just as in the example above. This solution is both more robust and precise, and it supports all the different flavors of what a line in your file could be:
$ grep -c ^ FILE
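A quick way to see the difference (the temporary file name is illustrative) is a file whose last line is a POSIX incomplete line:

```shell
# A two-line file whose second line is a POSIX "incomplete line"
# (no trailing newline character).
printf 'first line\nsecond line' > incomplete.txt

wc_count=$(wc -l < incomplete.txt)      # counts newline characters
grep_count=$(grep -c ^ incomplete.txt)  # counts started lines
rm incomplete.txt
```

Here wc -l reports 1 (one newline character) while grep -c ^ reports 2 (two started lines), which matches the POSIX reading of the file.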
Source: Stackoverflow.com