[decimal] Calculating bits required to store decimal number

This is a homework question that I am stuck with.

Consider unsigned integer representation. How many bits will be required to store a decimal number containing:

i) 3 digits ii) 4 digits iii) 6 digits iv) n digits

I know that the range of an n-bit unsigned integer is 0 to 2^n - 1, but I don't see how the number of bits required to represent a number follows from that. Please help me out.

Thanks in advance.


Keep dividing the number by 2 until you get a quotient of 0; the number of divisions performed is the number of bits required.


The largest number that can be represented by an n-digit number in base b is b^n - 1. Hence, the largest number that can be represented in N binary digits is 2^N - 1. We need the smallest integer N such that:

2^N - 1 >= b^n - 1
=> 2^N >= b^n

Taking the base 2 logarithm of both sides of the last expression gives:

log2 2^N >= log2 b^n
=> N >= log2 b^n
=> N >= log b^n / log 2

Since we want the smallest integer N that satisfies the last relation, to find N, find log b^n / log 2 and take the ceiling.

In the last expression, any base is fine for the logarithms, so long as both bases are the same. It is convenient here, since we are interested in the case where b = 10, to use base-10 logarithms, taking advantage of log10 10^n = n.

For n = 3:

N = ⌈3 / log10 2⌉ = 10

For n = 4:

N = ⌈4 / log10 2⌉ = 14

For n = 6:

N = ⌈6 / log10 2⌉ = 20

And in general, for n decimal digits:

N = ⌈n / log10 2⌉
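
A quick Python check of the general formula (the function name is mine):

from math import ceil, log10

def bits_for_decimal_digits(n):
    # N = ceil(n / log10(2)): the smallest N with 2^N - 1 >= 10^n - 1
    return ceil(n / log10(2))

print(bits_for_decimal_digits(3))  # 10
print(bits_for_decimal_digits(4))  # 14
print(bits_for_decimal_digits(6))  # 20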


There are a lot of answers here, but I'll add my approach since I found this post while working on the same problem.

Starting with what we know, here are the numbers from 0 to 16.

Number           encoded in bits         minimum number of bits to encode
0                000000                  1
1                000001                  1
2                000010                  2
3                000011                  2
4                000100                  3
5                000101                  3
6                000110                  3
7                000111                  3
8                001000                  4
9                001001                  4
10               001010                  4
11               001011                  4
12               001100                  4
13               001101                  4
14               001110                  4
15               001111                  4
16               010000                  5

Looking at the breaks, we get this table:

number <=       number of bits
1               1
3               2
7               3
15              4

So, now how do we compute the pattern?

Remember that log base 2 (n) = log base 10 (n) / log base 10 (2)

number    log10(n)     log2(n)     ceil(log2(n))
1         0            0           0           (special case)
3         0.477        1.58        2
7         0.845        2.807       3  
8         0.903        3           3           (special case)
15        1.176        3.91        4
16        1.204        4           4           (special case)
31        1.491        4.95        5
63        1.799        5.98        6

Now the last column mostly matches the first table. Notice that some values are special cases: 0 and any number that is a power of 2. These values don't change when you apply the ceiling, which tells you that you need to add 1 to get the minimum bit-field length.

To account for the special cases, add one to the input. The resulting code, implemented in Python, is:

from math import ceil, log2

def min_num_bits_to_encode_number(a_number):
    a_number = a_number + 1  # adjust by 1 for the special cases (0 and powers of 2)
    if a_number == 0:  # log of zero is undefined
        return 0
    return ceil(log2(a_number))
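
For example, checked against the first table above:

print(min_num_bits_to_encode_number(15))  # 4
print(min_num_bits_to_encode_number(16))  # 5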

OK, to generalize the technique of working out how many bits you need to represent a number: you have R symbols in a representation and you want to know how many bits that takes, so solve the equation R = 2^n, or log2(R) = n, where n is the number of bits and R is the number of symbols in the representation.

For the decimal number system R = 10 (ten symbols, 0 through 9), so we solve 10 = 2^n; the answer is about 3.32 bits per decimal digit. Thus a 3-digit number will need 9.97 bits, rounded up to 10. A 1000-digit number needs 3322 bits.
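
A quick check in Python (the names are mine):

from math import ceil, log2

bits_per_digit = log2(10)            # about 3.32 bits per decimal digit
print(ceil(3 * bits_per_digit))      # 10 bits for a 3-digit number
print(ceil(1000 * bits_per_digit))   # 3322 bits for a 1000-digit number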


Assuming that the question is asking for the minimum number of bits required to store

  1. a 3-digit number

My approach to this question would be:

  • what's the largest 3-digit number we need to store? Ans: 999
  • what's the minimum number of bits required to store this number?

This problem can be solved by repeatedly dividing 999 by 2. However, it's simpler to use the power of maths to help us. Essentially, we're solving for n in the equation below:

2^n = 999
n log 2 = log 999
n ≈ 9.96, rounded up to 10

You'll need 10 bits to store a 3-digit number.
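
A sketch of the repeated-division approach in Python (the function name is mine):

def bits_by_repeated_division(x):
    # Count how many times x can be halved before the quotient reaches 0
    bits = 0
    while x > 0:
        x //= 2
        bits += 1
    return bits

print(bits_by_repeated_division(999))  # 10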

Use a similar approach to solve the other subquestions!

Hope this helps!


The formula for the number of binary bits required to store n integers (for example, 0 to n - 1) is:

loge(n) / loge(2)

and round up.

For example, for values -128 to 127 (signed byte) or 0 to 255 (unsigned byte), the number of integers is 256, so n is 256, giving 8 from the above formula.

For 0 to n, use n + 1 in the above formula (there are n + 1 integers).

On your calculator, loge may just be labelled log or ln (natural logarithm).
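
A minimal Python sketch of this formula (the function name is mine; math.log2(n) is the same quantity as loge(n) / loge(2)):

from math import ceil, log2

def bits_for_n_values(n):
    # Bits needed to distinguish n different values (n >= 1)
    return ceil(log2(n))

print(bits_for_n_values(256))   # 8  (0 to 255, or -128 to 127)
print(bits_for_n_values(1000))  # 10 (0 to 999)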


The simplest answer would be to convert the required values to binary, and see how many bits are required for that value. However, the question asks how many bits for a decimal number of X digits. In this case, it seems like you have to choose the highest value with X digits, and then convert that number to binary.

As a basic example, let's assume we wanted to store a 1-digit base-ten number and wanted to know how many bits that would require. The largest 1-digit base-ten number is 9, so we need to convert it to binary. This yields 1001, which has a total of 4 bits. The same example can be applied to a two-digit number (with the max value being 99, which converts to 1100011, i.e. 7 bits). To solve for n digits, you probably need to solve the others and search for a pattern.

To convert values to binary, you repeatedly divide by two until you get a quotient of 0 (all of your remainders will be 0 or 1). You then reverse the order of your remainders to get the number in binary.

Example: 13 to binary.

  • 13/2 = 6 r 1
  • 6/2 = 3 r 0
  • 3/2 = 1 r 1
  • 1/2 = 0 r 1
  • = 1101 ((8*1) + (4*1) + (2*0) + (1*1))

Hope this helps out.
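
A small Python sketch of that procedure (the function name is mine):

def to_binary(x):
    # Repeatedly divide by 2, collect the remainders, then reverse them
    if x == 0:
        return "0"
    remainders = []
    while x > 0:
        remainders.append(str(x % 2))
        x //= 2
    return "".join(reversed(remainders))

print(to_binary(13))   # 1101
print(to_binary(999))  # 1111100111, i.e. 10 bits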


Well, you just have to calculate the range for each case and find the lowest power of 2 that is higher than that range.

For instance, in i), 3 decimal digits -> 10^3 = 1000 possible numbers so you have to find the lowest power of 2 that is higher than 1000, which in this case is 2^10 = 1024 (10 bits).

Edit: Basically, you need to find the number of possible values with the number of digits you have, and then find which number of digits in the other base (in this case base 2, binary) covers at least as many possible values as the decimal one.

To calculate the number of possibilities given the number of digits: possibilities=base^ndigits

So, if you have 3 digits in decimal (base 10) you have 10^3=1000 possibilities. Then you have to find a number of digits in binary (bits, base 2) so that the number of possibilities is at least 1000, which in this case is 2^10=1024 (9 digits isn't enough because 2^9=512 which is less than 1000).

If you generalize this, you have: 2^nbits=possibilities <=> nbits=log2(possibilities)

Which, applied to i), gives log2(1000) = 9.97, and since the number of bits has to be an integer, you have to round it up to 10.
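
A brute-force Python sketch of that search (the names are mine):

def bits_for_digits(ndigits):
    # Smallest nbits such that 2**nbits >= 10**ndigits
    possibilities = 10 ** ndigits
    nbits = 0
    while 2 ** nbits < possibilities:
        nbits += 1
    return nbits

print(bits_for_digits(3))  # 10, since 2**9 = 512 < 1000 <= 2**10 = 1024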


For a binary number of n digits, the maximum decimal value it can hold is

2^n - 1, and 2^n is the total number of permutations that can be generated using that many digits.

Taking the case where you only want three digits, i.e. your case i), we see that the requirement is:

2^n - 1 >= 999

Adding 1 to both sides gives 2^n >= 1000. Applying log2 to both sides:

n >= log2(1000)

n >= 9.966 (approx.)

Taking the ceiling, since 9.966 isn't a valid number of bits, we get n = 10.
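
You can check the boundary directly in Python:

print(2**9 - 1)   # 511, not enough to hold 999
print(2**10 - 1)  # 1023, enough to hold 999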


This one works!

floor(loge(n) / loge(2)) + 1

To include negative numbers, you can add an extra bit to specify the sign.

floor(loge(abs(n)) / loge(2)) + 2
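
A Python sketch of both formulas (the function names are mine; math.log2(n) equals loge(n) / loge(2)):

from math import floor, log2

def bits_unsigned(n):
    # floor(log2(n)) + 1 bits to represent n itself (n >= 1)
    return floor(log2(n)) + 1

def bits_signed(n):
    # One extra bit to hold the sign
    return floor(log2(abs(n))) + 2

print(bits_unsigned(255))  # 8
print(bits_signed(-255))   # 9

For positive integers, Python's built-in int.bit_length() returns the same value as the unsigned formula.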

The short answer is:

int nBits = ceil(log2(N));

That's simply because pow(2, nBits) is then the smallest power of 2 that is greater than or equal to N.


Let the required number of bits be n. Then 2^n = (base)^digits; take the log of both sides and solve for n, rounding the result up to a whole number of bits.