[bash] How do I count the number of rows and columns in a file using bash?

Say I have a large file with many rows and many columns. I'd like to find out how many rows and columns I have using bash.

This question is related to: bash, row

The answer is


Alternatively, to count columns, count the separators between them. I find this to be a good balance of brevity and memorability. Of course, this won't work if your fields themselves contain the separator character.

head -n1 myfile.txt | grep -o " " | wc -l

Uses head -n1 to grab the first line of the file. Uses grep -o " " to find every space and print each one on its own line. Uses wc -l to count those lines, which gives the number of separators, i.e. one less than the number of columns.
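Since this counts separators rather than columns, add 1 if you want the column count itself. A minimal sketch, assuming space-separated data in myfile.txt:

ncols=$(( $(head -n1 myfile.txt | grep -o " " | wc -l) + 1 ))
echo "$ncols"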


If your file is big but you are certain that the number of columns remains the same for each row (and it has no heading line), use:

head -n 1 FILE | awk '{print NF}'

to find the number of columns, where FILE is your file name.

To find the number of lines, wc -l FILE will work.
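To report both numbers at once, a small sketch (assuming FILE is your file name, as above):

cols=$(head -n 1 FILE | awk '{print NF}')
rows=$(wc -l < FILE)
echo "rows: $rows, columns: $cols"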


The following code will do the job and lets you specify the field delimiter. Because it reports both the minimum and maximum column counts, it is especially useful for files containing more than 20k lines, where you cannot easily check by eye whether every row has the same number of columns.

awk 'BEGIN {
  FS="|";       # field delimiter; change this to match your file
  min=10000;    # assumes no row has more than 10000 columns
}
{
  if( NF > max ) max = NF;
  if( NF < min ) min = NF;
}
END {
  print "Max=" max;
  print "Min=" min;
} ' myPipeDelimitedFile.dat
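If you would rather pass the delimiter in from the shell instead of editing the BEGIN block, awk's -v option works too. A sketch, where delim is a hypothetical shell variable holding your separator:

delim='|'
awk -v FS="$delim" 'NR==1{min=max=NF} {if(NF>max)max=NF; if(NF<min)min=NF} END{print "Max=" max; print "Min=" min}' myPipeDelimitedFile.dat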

A very simple way to count the columns of the first line in pure bash (no awk, perl, or other languages):

read -r line < "$input_file"
ncols=$(echo "$line" | wc -w)

This will work as long as the columns are separated by whitespace and the fields themselves contain no embedded spaces.
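If your delimiter is not whitespace, you can stay in pure bash by splitting the first line into an array via IFS. A minimal sketch, assuming a comma-delimited file:

IFS=',' read -r -a fields < "$input_file"
ncols=${#fields[@]}
echo "$ncols"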


Perl solution:

perl -ane '$maxc = $#F if $#F > $maxc; END{$maxc++; print "max columns: $maxc\nrows: $.\n"}' file

If your input file is comma-separated:

perl -F, -ane '$maxc = $#F if $#F > $maxc; END{$maxc++; print "max columns: $maxc\nrows: $.\n"}' file

output:

max columns: 5
rows: 2

-a autosplits the input line into the @F array
$#F is the number of columns minus 1
-F, sets the field separator to , instead of whitespace
$. is the line number, i.e. the number of rows at the end
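For a tab-separated file, the same idea should work with -F'\t' (a sketch, assuming tab-delimited data):

perl -F'\t' -ane '$maxc = $#F if $#F > $maxc; END{$maxc++; print "max columns: $maxc\nrows: $.\n"}' file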


If counting the number of columns in the first line is enough, try the following:

awk -F'\t' '{print NF; exit}' myBigFile.tsv

where \t is the column delimiter.
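If you also want the row count, one pass over the file gives you both (a sketch, again assuming tab-delimited data with a constant column count):

awk -F'\t' 'NR==1{cols=NF} END{print cols " columns, " NR " rows"}' myBigFile.tsv

Unlike the exit version, this reads the whole file, since it has to reach the end to know NR.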


head -1 file.tsv | tr '\t' '\n' | wc -l

Take the first line, translate the tabs to newlines (use ',' instead of '\t' if your file is comma-separated), and count the resulting lines.


awk 'BEGIN{FS=","}END{print "COLUMN NO: "NF " ROWS NO: "NR}' file

You can use any delimiter as the field separator; this reports both the number of columns and the number of rows.
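For example, with a small hypothetical CSV:

$ printf 'a,b,c\nd,e,f\n' > sample.csv
$ awk 'BEGIN{FS=","}END{print "COLUMN NO: "NF " ROWS NO: "NR}' sample.csv
COLUMN NO: 3 ROWS NO: 2

Note that NF in the END block is the field count of the last line read, so this assumes every row has the same number of columns.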


For rows, you can simply use wc -l file

-l tells wc to count lines, so it prints the total line count.

For columns, you can simply use head -1 file | tr ";" "\n" | wc -l

Explanation

head -1 file
Grabs the first line of your file, which should be the headers, and sends it to the next command through the pipe.

| tr ";" "\n"
tr stands for translate. It translates every ; character into a newline character. In this example ; is your delimiter. It then sends the data to the next command.

wc -l
Counts the total number of lines.
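Putting it together on a small hypothetical semicolon-separated file:

printf 'id;name;score\n1;alice;10\n' > data.txt
head -1 data.txt | tr ";" "\n" | wc -l     # prints 3 (columns)
wc -l data.txt                             # prints "2 data.txt" (rows)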


A simple row count is $(wc -l "$file"). Use $(wc -lL "$file") to show both the number of lines and the length of the longest line.
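Note that wc -l "$file" prints the file name after the count; if you only want the number (to use in a script, say), redirect instead:

rows=$(wc -l < "$file")
echo "$rows"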


You can use pure bash. Note that for very large files (gigabytes), awk or wc will be faster, but performance should still be manageable for files of a few MB.

declare -i count=0
while read -r
do
    ((count++))
done < file
echo "line count: $count"

A little twist on kirill_igum's answer: you can easily count the number of columns of any particular row, which is why I came to this question, even though it asks about the whole file. (If every line of your file has the same number of columns, this of course still answers that too):

head -2 file |tail -1 |tr '\t' '\n' |wc -l

Gives the number of columns of row 2. Replace 2 with 55 for example to get it for row 55.

-bash-4.2$ cat file
1       2       3
1       2       3       4
1       2
1       2       3       4       5

-bash-4.2$ head -1 file |tail -1 |tr '\t' '\n' |wc -l
3
-bash-4.2$ head -4 file |tail -1 |tr '\t' '\n' |wc -l
5

The code above works if your file is tab-separated, since that is what we pass to tr. If your file has another separator, say commas, you can still count your "columns" using the same trick by simply changing the '\t' given to tr to ',':

-bash-4.2$ cat csvfile
1,2,3,4
1,2
1,2,3,4,5
-bash-4.2$ head -2 csvfile |tail -1 |tr '\,' '\n' |wc -l
2
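If you would rather address the row directly than pair head with tail, a sed variant does the same job (a sketch, assuming a tab-separated file and that you want row 55):

row=55
sed -n "${row}p" file | tr '\t' '\n' | wc -l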