[c++] Why does C++ compilation take so long?

Compiling a C++ file takes a very long time compared to C# and Java. It takes significantly longer to compile a C++ file than it would to run a normal-sized Python script. I'm currently using VC++, but it's the same with any compiler. Why is this?

The two reasons I could think of were loading header files and running the preprocessor, but that doesn't seem like it should explain why it takes so long.


C++ is compiled into machine code. So you have the pre-processor, the compiler, the optimizer, and finally the assembler, all of which have to run.

Java and C# are compiled into byte-code/IL, which the Java virtual machine or the .NET runtime then interprets or JIT-compiles into machine code at execution time.

Python is an interpreted language that is also compiled into byte-code.

I'm sure there are other reasons for this as well, but in general, not having to compile to native machine language saves time.


The trade off you are getting is that the program runs a wee bit faster. That may be a cold comfort to you during development, but it could matter a great deal once development is complete, and the program is just being run by users.


In large object-oriented projects, the significant reason is that C++ makes it hard to confine dependencies.

Private functions need to be listed in their respective class' public header, which makes dependencies more transitive (contagious) than they need to be:

// Ugly private dependencies
#include <map>
#include <list>
#include <chrono>
#include <stdio.h>
#include <Internal/SecretArea.h>
#include <ThirdParty/GodObjectFactory.h>

class ICantHelpButShowMyPrivatePartsSorry
{
public:
    int facade(int);

private:
    std::map<int, int> implementation_detail_1(std::list<int>);
    std::chrono::years implementation_detail_2(FILE*);
    Internal::SecretArea implementation_detail_3(const GodObjectFactory&);
};

If this pattern is blissfully repeated into dependency trees of headers, this tends to create a few "god headers" that indirectly include large portions of all headers in a project. They are as all-knowing as god objects, except that this is not apparent until you draw their inclusion trees.

This adds to compilation time in 2 ways:

  1. The amount of code they add to each compilation unit (.cpp file) that includes them is easily many times more than the cpp files themselves. To put this in perspective, catch2.hpp is 18000 lines, whereas most people (even IDEs) start to struggle editing files larger than 1000-10000 lines.
  2. The number of files that have to be recompiled when a header is edited is not contained to the true set of files that depend on it.

Yes, there are mitigations, like forward declaration, which has perceived downsides, or the pimpl idiom, which is a nonzero cost abstraction. Even though C++ is limitless in what you can do, your peers will be wondering what you have been smoking if you go too far from how it's meant to be.
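For illustration, here is a minimal sketch of the pimpl idiom (names hypothetical), which moves the private dependencies out of the header at the cost of one allocation and an extra indirection:

// Facade.h -- no private dependencies leak into this header
#include <memory>

class Facade
{
public:
    Facade();
    ~Facade();    // must be defined in Facade.cpp, where Impl is complete
    int facade(int);

private:
    struct Impl;                  // incomplete type; details live in the .cpp
    std::unique_ptr<Impl> impl_;
};

Only the .cpp now needs <map>, <list>, <chrono>, and the other private headers, so editing them recompiles one translation unit instead of every client of the header.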

The worst part: if you think about it, declaring private functions in the public header is not even necessary: the moral equivalent of member functions can be, and commonly is, mimicked in C, which does not recreate this problem.


To answer this question simply, C++ is a much more complex language than other languages available on the market. It has a legacy inclusion model that parses code multiple times, and its templated libraries are not optimized for compilation speed.

Grammar and ADL

Let's have a look at the grammatical complexity of C++ by considering a very simple example:

x*y;

While you’d be likely to say that the above is an expression with multiplication, this is not necessarily the case in C++. If x is a type, then the statement is, in fact, a pointer declaration. This means that C++ grammar is context-sensitive.
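A small self-contained sketch of the two readings (names hypothetical):

// The meaning of "a*b;" depends on what "a" names at that point.
int x = 6, y = 7;
struct X {};

void demo()
{
    x*y;    // x names a variable: expression statement (multiply, discard)
    X*p;    // X names a type: declaration of p as a pointer to X
}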

Here’s another example:

foo<x> a;

Again, you might think this is a declaration of the variable "a" of type foo<x>, but it could also be interpreted as:

(foo < x) > a;

which would make it a comparison expression.
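Inside a template the parser cannot resolve such ambiguities until instantiation, which is why C++ needs the typename (and template) disambiguation keywords. A minimal sketch:

struct S { using type = int; };

template <typename T>
void h()
{
    typename T::type v{};   // "typename" tells the parser T::type names a type
    (void)v;                // without the keyword, the dependent name T::type
                            // would be assumed to be a value, not a type
}

int main() { h<S>(); }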

C++ has a feature called Argument Dependent Lookup (ADL). ADL establishes the rules that govern how the compiler looks up a name. Consider the following example:

namespace A{
  struct Aa{}; 
  void foo(Aa arg);
}
namespace B{
  struct Bb{};
  void foo(A::Aa arg, Bb arg2);
}
namespace C{ 
  struct Cc{}; 
  void foo(A::Aa arg, B::Bb arg2, C::Cc arg3);
}

int main(){
  foo(A::Aa{}, B::Bb{}, C::Cc{}); // unqualified call: ADL considers A, B and C
}

ADL rules state that the name "foo" is looked up in the namespaces associated with all of the arguments of the function call. In this case, all of the functions named “foo” will be considered for overload resolution. This process might take time, especially if there are lots of function overloads. In a templated context, ADL rules become even more complicated.
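ADL is not only a cost; it is also what makes everyday unqualified calls work. A small self-contained sketch (names hypothetical):

#include <iostream>

namespace lib {
    struct Widget {};
    // Found via ADL: the argument's type lives in namespace lib.
    void swap(Widget&, Widget&) { std::cout << "lib::swap\n"; }
}

int main()
{
    lib::Widget a, b;
    swap(a, b);   // unqualified call; ADL searches namespace lib and finds it
}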

#include

This directive is something that might significantly influence compilation times. Depending on the type of file you include, the preprocessor might copy only a couple of lines of code, or it might copy thousands.

Furthermore, inclusion cannot simply be cached by the compiler: if the header file depends on macros, the same header can expand to different code in every translation unit, depending on what was defined before its inclusion.
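A sketch of such a macro-dependent header (file and macro names hypothetical):

// real.h: expands differently depending on what the includer defined first,
// so its text must be re-processed in every translation unit that uses it.
#ifdef USE_DOUBLE_PRECISION
typedef double real;
#else
typedef float real;
#endif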

There are some solutions to these issues. You can use precompiled headers, which store the compiler's internal representation of what was parsed in the header. This requires some effort from the user, however, because precompiled headers assume that headers are not macro-dependent.

The modules feature provides a language-level solution to this problem. It’s available from the C++20 release onward.
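A minimal sketch of a C++20 named module (the interface-file extension varies by compiler, e.g. .ixx for MSVC, .cppm for Clang):

// math.cppm -- module interface, compiled once into a binary representation
export module math;
export int add(int a, int b) { return a + b; }

// main.cpp -- "import" loads the compiled interface instead of re-parsing text
import math;
int main() { return add(1, 2) == 3 ? 0 : 1; }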

Templates

Templates are another compilation-speed challenge. Each translation unit that uses a template must see its definition, so template definitions are included everywhere they are used. Some template instantiations trigger instantiations of other templates, and in extreme cases template instantiation can consume a lot of resources. A library that uses templates and was not designed for compilation speed can become troublesome, as you can see in the comparison of metaprogramming libraries at http://metaben.ch/. Their differences in compilation speed are significant.
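A small sketch of how instantiations cascade: each distinct template argument forces the compiler to instantiate further templates recursively, all at compile time:

template <int N>
struct Fib { static const int value = Fib<N - 1>::value + Fib<N - 2>::value; };

template <> struct Fib<0> { static const int value = 0; };
template <> struct Fib<1> { static const int value = 1; };

// Instantiating Fib<20> drags in Fib<19>, Fib<18>, ... down to the base cases.
static_assert(Fib<20>::value == 6765, "computed entirely by the compiler");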

If you want to understand why some metaprogramming libraries are better for compilation times than others, check out this video about the Rule of Chiel.

Conclusion

C++ is a slowly compiled language because compilation performance was not the highest priority when the language was initially developed. As a result, C++ ended up with features that might be effective during runtime, but are not necessarily effective during compile time.

P.S – I work at Incredibuild, a software development acceleration company specializing in accelerating C++ compilations, you are welcome to try it for free.


A compiled language is always going to require a bigger initial overhead than an interpreted language. In addition, perhaps you didn't structure your C++ code very well. For example:

#include "BigClass.h"

class SmallClass
{
   BigClass m_bigClass;
};

Compiles a lot slower than:

class BigClass;

class SmallClass
{
   BigClass* m_bigClass;
};

Parsing and code generation are actually rather fast. The real problem is opening and closing files. Remember, even with include guards, the compiler still has to open the .h file and read each line (and then ignore it).
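For reference, a classic include guard (header name hypothetical); the guard prevents redefinition, but the preprocessor must still open and scan the entire file on every re-inclusion just to skip it:

// widget.h
#ifndef WIDGET_H
#define WIDGET_H

class Widget { /* ... */ };

#endif // WIDGET_H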

A friend once (while bored at work) took his company's application and put everything -- all source and header files -- into one big file. Compile time dropped from 3 hours to 7 minutes.


An easy way to reduce compilation time in larger C++ projects is to make a *.cpp include file that includes all the cpp files in your project and compile only that (this is sometimes called a "unity build"). Headers are then parsed once for the whole project instead of once per cpp file. The advantage of this is that compilation errors will still reference the correct file.

For example, assume you have a.cpp, b.cpp and c.cpp.. create a file: everything.cpp:

#include "a.cpp"
#include "b.cpp"
#include "c.cpp"

Then compile the project by just making everything.cpp


Most answers neglect to mention that C# will always run slower because of work that C++ performs only once, at compile time. This performance cost is compounded by runtime dependencies (more things to load before the program can run), and C# programs will always have a higher memory footprint. As a result, performance depends more closely on the capability of the available hardware. The same is true of other languages that are interpreted or depend on a VM.


Another reason is the use of the C pre-processor for locating declarations. Even with header guards, .h files still have to be parsed over and over, every time they're included. Some compilers support pre-compiled headers that can help with this, but they are not always used.

See also: C++ Frequently Questioned Answers


The biggest issues are:

1) The infinite header reparsing. Already mentioned. Mitigations (like #pragma once) usually only work per compilation unit, not per build.

2) The fact that the toolchain is often separated into multiple binaries (make, preprocessor, compiler, assembler, archiver, impdef, linker, and dlltool in extreme cases) that all have to reinitialize and reload all state all the time for each invocation (compiler, assembler) or every couple of files (archiver, linker, and dlltool).

See also this discussion on comp.compilers: http://compilers.iecc.com/comparch/article/03-11-078 especially this one:

http://compilers.iecc.com/comparch/article/02-07-128

Note that John, the moderator of comp.compilers, seems to agree, and that this means it should be possible to achieve similar speeds for C too, if one integrates the toolchain fully and implements precompiled headers. Many commercial C compilers do this to some degree.

Note that the Unix model of factoring everything out into separate binaries is kind of the worst-case model for Windows (with its slow process creation). It is very noticeable when comparing GCC build times between Windows and *nix, especially if the make/configure system also calls some programs just to obtain information.


Some reasons are:

1) C++ grammar is more complex than C# or Java and takes more time to parse.

2) (More important) The C++ compiler produces machine code and does all optimizations during compilation. C# and Java go only halfway and leave these steps to the JIT.


Building C/C++: what really happens and why does it take so long

A relatively large portion of software development time is not spent on writing, running, debugging or even designing code, but waiting for it to finish compiling. In order to make things fast, we first have to understand what is happening when C/C++ software is compiled. The steps are roughly as follows:

  • Configuration
  • Build tool startup
  • Dependency checking
  • Compilation
  • Linking

We will now look at each step in more detail focusing on how they can be made faster.

Configuration

This is the first step when starting to build. It usually means running a configure script, or CMake, Gyp, SCons, or some other tool. This can take anything from one second to several minutes for very large Autotools-based configure scripts.

This step happens relatively rarely. It only needs to be run when the build configuration changes. Short of changing build systems, there is not much to be done to make this step faster.

Build tool startup

This is what happens when you run make or click on the build icon on an IDE (which is usually an alias for make). The build tool binary starts and reads its configuration files as well as the build configuration, which are usually the same thing.

Depending on build complexity and size, this can take anywhere from a fraction of a second to several seconds. By itself this would not be so bad. Unfortunately, most make-based build systems cause make to be invoked tens to hundreds of times for every single build. Usually this is caused by recursive use of make (which is bad).

It should be noted that the reason Make is so slow is not an implementation bug. The syntax of Makefiles has some quirks that make a really fast implementation all but impossible. This problem is even more noticeable when combined with the next step.

Dependency checking

Once the build tool has read its configuration, it has to determine which files have changed and which ones need to be recompiled. The configuration files contain a directed acyclic graph describing the build dependencies. This graph is usually built during the configure step. Build tool startup time and the dependency scanner are run on every single build, so their combined runtime determines the lower bound on the edit-compile-debug cycle. For small projects this time is usually a few seconds or so, which is tolerable.

There are alternatives to Make. The fastest of them is Ninja, which was built by Google engineers for Chromium. If you are using CMake or Gyp to build, just switch to their Ninja backends. You don't have to change anything in the build files themselves; just enjoy the speed boost. Ninja is not packaged on most distributions, though, so you might have to install it yourself.

Compilation

At this point we finally invoke the compiler. Cutting some corners, here are the approximate steps taken.

  • Merging includes
  • Parsing the code
  • Code generation/optimization

Contrary to popular belief, compiling C++ is not actually all that slow. The STL is slow to compile, and most build tools used to compile C++ are slow. However, there are faster tools and ways to mitigate the slow parts of the language.

Using them takes a bit of elbow grease, but the benefits are undeniable. Faster build times lead to happier developers, more agility and, eventually, better code.


The slowdown is not necessarily the same with any compiler.

I haven't used Delphi or Kylix, but back in the MS-DOS days, a Turbo Pascal program would compile almost instantaneously, while the equivalent Turbo C++ program would just crawl.

The two main differences were a very strong module system and a syntax that allowed single-pass compilation.

It's certainly possible that compilation speed just hasn't been a priority for C++ compiler developers, but there are also some inherent complications in the C/C++ syntax that make it more difficult to process. (I'm not an expert on C, but Walter Bright is, and after building various commercial C/C++ compilers, he created the D language. One of his changes was to enforce a context-free grammar to make the language easier to parse.)

Also, you'll notice that generally Makefiles are set up so that every file is compiled separately in C, so if 10 source files all use the same include file, that include file is processed 10 times.


There are two issues I can think of that might be affecting the speed at which your programs in C++ are compiling.

POSSIBLE ISSUE #1 - COMPILING THE HEADER: (This may or may not have already been addressed by another answer or comment.) Microsoft Visual C++ (A.K.A. VC++) supports precompiled headers, which I highly recommend. When you create a new project and select the type of program you are making, a setup wizard window should appear on your screen. If you hit the “Next >” button at the bottom of it, the window will take you to a page that has several lists of features; make sure that the box next to the “Precompiled header” option is checked. (NOTE: This has been my experience with Win32 console applications in C++, but this may not be the case with all kinds of programs in C++.)

POSSIBLE ISSUE #2 - THE LOCATION BEING COMPILED TO: This summer, I took a programming course, and we had to store all of our projects on 8GB flash drives, as the computers in the lab we were using got wiped every night at midnight, which would have erased all of our work. If you are compiling to an external storage device for the sake of portability/security/etc., it can take a very long time (even with the precompiled headers that I mentioned above) for your program to compile, especially if it’s a fairly large program. My advice for you in this case would be to create and compile programs on the hard drive of the computer you’re using, and whenever you want/need to stop working on your project(s) for whatever reason, transfer them to your external storage device, and then click the “Safely Remove Hardware and Eject Media” icon, which should appear as a small flash drive behind a little green circle with a white check mark on it, to disconnect it.

I hope this helps you; let me know if it does! :)

