Native programs run using instructions written for the processor they run on.

Interpreted languages are just that, "interpreted". Some other form of instruction is read, and interpreted, by a runtime, which in turn executes native machine instructions.

Think of it this way: if you can talk to someone in your native language, that will generally be faster than having an interpreter translate what you say into another language for the listener to understand.

Note that what I am describing above applies when a language is running in an interpreter. Many languages that have interpreters also have native compilers that build native machine instructions. The speed reduction (whatever its size) only applies in the interpreted context.

So it is slightly incorrect to say that the language is slow; rather, it is the context in which it is running that is slow.

C# is not an interpreted language. Even though it employs an intermediate language (IL), the IL is JITted to native instructions before being executed, so it has some of the same speed reduction, but not all of it. Still, I'd bet that if you built a fully fledged interpreter for C# or C++, it would run slower as well.

And just to be clear, when I say "slow", that is of course a relative term.

Answer from Lasse V. Karlsen on Stack Overflow
r/ProgrammingLanguages on Reddit: Can an interpreted language be faster than a compiled language?
March 5, 2020

When I was reading the Common Lisp book, I saw that they compile a regular expression to machine code at run time. I wondered whether there is a way of doing this without runtime compilation, and I can't think of any better solution. I was also thinking that if runtime compilation can solve one problem, maybe it can solve other problems too.

If there is a better solution than runtime compilation, let me know.

If not, tell me other examples that can be useful.

Top answer
1 of 5
25

Not to nitpick, but as something that might be of interest to some readers who aren't aware of this aspect, and hopefully not annoying to those who consider the following unhelpfully pedantic...

There are those who would (and have) upvoted a StackOverflow answer to the question "Compiled vs. Interpreted Languages", which begins:

“compiled programming language” and “interpreted programming language” aren’t meaningful concepts. Any programming language, and I really mean any, can be interpreted or compiled. Thus, interpretation and compilation are implementation techniques, not attributes of languages.

I think it's at least interesting and perhaps significant that the answer I link to and quote above was written in 2015 and has no downvotes (but only 20 upvotes), whereas the accepted answer was written and accepted in 2010, has 3 downvotes (but 450 upvotes), and includes a key "no it isn't ... yes it is ... no it isn't" exchange in the comments below it:

This is actually a false dichotomy. There is nothing intrinsic to a language that makes it compiled or interpreted. It is nothing more than a widely held misconception. Many languages have both implementations and all languages can have either.

It is not a false dichotomy. "Programming language" includes both design and implementation. While in a theoretical sense a given language definition can be both compiled and interpreted, in real-world practice there are considerable differences in implementation. Nobody has yet solved how to effectively compile certain language constructs, for example - it is an open research problem.

It is. There are benefits to compiling and there are benefits to interpreting. Just because compiler technology is developing to improve on certain language features doesn't mean we can say anything about the benefits of compiling a language with that feature. Conflating language and implementation causes us to have false understandings when choosing compilation or interpretation for an implementation. For example, your comment "[interpreters] Can be more convenient for dynamic languages"

Similarly, from Wikipedia's High-level programming language page:

Note that languages are not strictly interpreted languages or compiled languages. Rather, implementations of language behavior use interpreting or compiling.

This was presumably written by someone who discusses this on the page's Talk page, along with another contributor, saying:

Interpreters and compilers are programs that process programming languages. Languages are not "interpreted" languages or "compiled" languages. Rather, language implementations use interpretation or compilation. For example, Algol 60 and Fortran have both been interpreted (even though they were more typically compiled). Similarly, Scheme has been compiled (even though it has been interpreted in most popular implementations). Java shows the difficulty of trying to apply these labels to languages rather than to implementations; Java is compiled to bytecode and the bytecode is subsequently executed by either interpretation (in a JVM) or compilation (typically with a just-in-time compiler such as HotSpot, again in a JVM).

[Someone else added] Similarly, C# and Visual Basic.Net are compiled to MSIL then just-in-time compiled to native machine code at the time of execution (this is a different strategy than Java as it incurs longer loading times to get the benefit of faster execution).

If I suggest this isn't pedantry then I'll presumably be labeled pedantic. If I suggest it is pedantry then I'll presumably be popular. So I'll do neither and just wrap up by noting that all dichotomies are false in reality, but they're easier to understand and so are often better than reality, except when they're worse, which is when you think you know more than Socrates.

2 of 5
15

YES.

http://lambda-the-ultimate.org/node/5075

And remember: a CPU is totally, 100% an interpreter, and that means interpreters get the last laugh in performance!

----

It is important to note that "compiled"/"interpreted" is an AXIS that just says WHEN certain decisions are made.

Performance, instead, is a quality or result of WHICH decisions and trade-offs were taken.

I heard a good explanation of this about C++: "C++ does NOT give you performance, it gives you CONTROL over performance."

You will find that people sometimes get perplexed when their C/Rust code runs slower than interpreted code. This happens, OFTEN, because:

  • They run in debug mode, without optimizations. Interpreter runtimes are normally shipped in release mode and run fast in both situations.

  • The programmer thinks "I run on the METAL, in a COMPILED LANG, yeah!!" and assumes that the DEFAULTS of the language are fast ALL the time. Instead, languages like C/C++/Rust may give you only the bare minimal features, which MUST be combined properly to achieve that.

A good example that often bites in Rust: reading a file line by line is not buffered, so it is SLOW. You MUST wrap your reader in a BufReader to make it fast: https://doc.rust-lang.org/std/io/struct.BufReader.html.

An interpreter runtime, instead, could have selected more ergonomic defaults that fit the task better, so the programmer does not need to know all those details, or risk choosing the wrong knobs or sub-optimal combinations of things.

  • An interpreter can take full advantage of the idioms/paradigms it is based on to defeat most hand-coding efforts.

A pair of very good examples: array languages (kdb+, APL, J) and SQL. SQL is the better known: it is very common that if you try to do by hand in C what an RDBMS does (joins, groups, etc.) on a big table, the interpreted SQL query planner defeats you, because SQL is based on the relational model and other things that let it exploit more optimal code paths, even faster than the normal C you would write.

  • Interpreted langs can do MORE than you think, and they do.

For example, you could argue that the above statement is false and come up with a simple C snippet that is actually faster, but only by:

  • Forgetting memory management, and/or

  • transactions,

  • the other ACID stuff,

  • the ability to compose (i.e.: SELECT * FROM (SELECT * FROM (SELECT 1 FROM (...),

  • and thousands of other things.

With a short, small snippet it is easy for a developer to defeat an interpreter (or likewise, the compiler's optimizations), but achieving decent performance across the WHOLE codebase is another thing.

That is why you need automation, and that is why some automations are neater in interpreters and others in compilers.

Eventually, both kinds of langs merge ideas from each other, because interpreters and compilers are two faces of the same coin, and both are good for all of us!

Top answer
1 of 15
85

(See Lasse V. Karlsen's answer, quoted in full at the top of this page.)

2 of 15
43

All answers seem to miss the real important point here. It's the detail of how "interpreted" code is implemented.

Interpreted script languages are slower because their model of methods, objects, and global variable space is dynamic. In my opinion, this is the real definition of a script language, not the fact that it is interpreted. This requires many extra hash-table lookups on each access to a variable or method call. It is also the main reason why they are all terrible at multithreading and use a GIL (Global Interpreter Lock). This lookup is where most of the time is spent, and it is a painful random memory access, which really hurts when you get an L1/L2 cache miss.

Google's V8 JavaScript engine is so fast, approaching C speed, thanks to a simple optimization: it takes the object data model as fixed and creates internal code to access it like the data structure of a natively compiled program. When a new variable or method is added or removed, the whole compiled code is discarded and compiled again.

The technique is well explained in the Deutsch/Schiffman paper "Efficient Implementation of the Smalltalk-80 System".

The question of why PHP, Python, and Ruby aren't doing this is pretty simple to answer: the technique is extremely complicated to implement.

And only Google has the money to pay for it in JavaScript, because a fast in-browser JavaScript engine is a fundamental need of their billion-dollar business model.

Discussions

interpreters - Why are commonly compiled languages not interpreted for faster iteration? - Software Engineering Stack Exchange (January 18, 2023)
When using a language where the behavior of a piece of code can only depend upon other code that has already been executed, an interpreter need not examine parts of the source text that are never actually executed. While it may be possible and even practical to design a C or C++ interpreter for use in situations where sandboxing was required, in all other regards a non-optimizing compiler would be faster ...

compilation - Is compiling code really faster than interpreting code? - Stack Overflow
However, what I know for granted ... code, compiles it to machine code, then executes it. Interpreter takes one statement at a time, translates it to machine code or virtual machine code, then executes it immediately. So we get the output on the fly, during the run-time. Then aren't interpreted languages faster than compiled ...

programming languages - Can we make general statements about the performance of interpreted code vs compiled code? - Software Engineering Stack Exchange
The question asks about generalities, ... in general interpreted languages may be slower as a whole, that fact likely has absolutely no bearing on your specific technologies. It's quite possible that your specific interpreted B is faster than your compiled A regardless of the general ...
Why are interpreted languages slower than compiled languages?

This gets somewhat tricky with more modern hybrid languages/runtimes, but I'll go the original route because it's easier to explain.

The simple answer is that in an interpreted language, everything you do has to run through an extra layer that translates it into what the physical machine does, every single time it's run. For example, let's do a + b. In a compiled language that would be one instruction on the CPU, something like iadd a b c. Addition takes about one CPU cycle, so you can do roughly 2 billion of these per second. All the type checking, variable locations and such have already been done.

In an interpreted language the parser will deconstruct a + b into looking up a and b in the current scope, making sure they're the right type, looking up the addition function that suits those types, adding, and storing the result back. In a naive implementation this could be 100+ instructions, three+ function calls, and multiple memory lookups.

Modern interpreted languages are much better than this, because they're commonly semi-compiled. Java and Python convert their raw code to something called "bytecode" that runs on a virtual machine. The virtual machine is sort of an idealized replica of a computer that you can compile your code to. The benefit is that your code is still fully portable, but you don't have to do all the same looking up/verification that a purely interpreted system does. Take a look at how similar Java's bytecode looks to assembly.

Some virtual machines (like Oracle's JVM, LLVM, PyPy, and Chrome's JavaScript interpreter) do something called "just in time" compilation. If the VM notices a piece of bytecode that's running a lot, it compiles it into hardware instructions so it can run full speed. Because of this, these languages can get very close to the same speed as compiled languages.

However, there's one more thing that keeps many of these from achieving full speed. That's garbage collection. Many modern OO languages let you create a new object, but don't require that you explicitly delete it. This makes our lives much better as programmers, but how does the VM know when to delete it? One of the easiest ways to find unused things is by stopping the VM, looking at all the objects, erasing the inaccessible ones, then resuming it. This is very bad for performance and there's a lot of research in this area for making it better.

If I didn't explain something well enough or you want more detail, I'd be happy to expound.

Cheers!

r/AskProgramming, January 27, 2016
freeCodeCamp: Interpreted vs Compiled Programming Languages: What's the Difference?
January 10, 2020 - Compiled languages are converted directly into machine code that the processor can execute. As a result, they tend to be faster and more efficient to execute than interpreted languages.
Top answer
1 of 10
35

I refute the premise. There are interpreters / REPLs for compiled, static languages, they're just not as much part of the common workflow as with dynamic languages. Though that also depends on the application. For example, scientists at CERN work a lot in C++ in the Root framework, and they also use the Cling interpreter a lot, an approach which combines many of the advantages of a fast compiled language and a slow interpreted one like Python, especially for scientific purposes.

With some other languages it's even more drastic. Haskell is a static, compiled language (in some ways even more static than OO languages), but it is very common to develop Haskell interactively using GHCi, either as a REPL (see the online version) or just as a quick typechecking pass to highlight what needs to be worked on. Once something is fully implemented, it'll then become part of a library that is always compiled, resulting in fast code, which can then be called in either a fully-compiled program or in another interactive session.

Of course it can also go the other way around: typical interpreted languages like Python, JavaScript and Common Lisp can all be compiled, at least in some senses of the word (either JIT-compiled, or a subset of the language can be statically compiled). Though in my opinion this approach is far more limited than starting with a strongly statically typed programming language and then using it more interactively, it can still be a good option for optimising the bottleneck parts of an interpreted program, and is indeed commonly done.

2 of 10
18

Why isn't a thing to interpret a codebase for quick iterative development instead of generating code for a binary each time?

Many languages, including C and C++, don't lend themselves to REPL-style interpreters. Making a one-line or even one-character change can have widespread impact on the behavior of a program (consider changes to a #define, for example). Somewhat ironically, this same sort of avalanche effect of small code changes leading to large program changes also makes incremental compilation very difficult. So languages that take a very long time to compile will tend to be ones that are also troublesome to interpret.

Medium: Compiled vs Interpreted Code Performance | by Rodrigo Ramirez | The Startup
October 21, 2019 - While there are elements of Java I still miss in Go, such as class inheritance, the language syntax is clean and compact, compilation and execution are fast, the generated code compact and its out of the box go-routines based approach to concurrency is really elegant. ... Compiled code runs at least one order of magnitude faster than interpreted code.

TheServerSide: Interpreted vs. compiled languages: What's the difference?
April 15, 2021 - At a high level, the difference between a compiled and interpreted language is that an interpreted language is compiled into an intermediary form and not machine code. Compiled code can run faster, but, unlike interpreted code in Java, it is ...

Pier Giorgio Grandinetti: Why Are Some Programming Languages Faster? | Compilers
Briefly explained, the reason is that Compilers produce so-called “target code”, that’s basically machine code. On the other hand, Interpreters add one more layer between the execution environment for your program, and the Operating System, and that slows things down a bit.
DEV Community: Interpreted vs. Compiled Languages: Understanding the Difference
March 13, 2025 - This results in optimized performance since the program does not require further compilation at runtime. Faster Execution – Since the code is precompiled, execution is significantly faster compared to interpreted languages.

Quora: Why does a compiled language typically execute faster than an interpreted language?
Answer (1 of 5): I assume by “compiled languages” you mean “those languages which are usually compiled to native code”. The thing is, compilation or interpretation is an implementation detail. People have written C interpreters and Python compilers. Also, most modern interpreters compile the sour...

Unique-Usman's blog: Interpreted vs. Compiled vs. Just-In-Time Compilation
February 13, 2025 - Performance: Compiled languages generally provide high performance because the code is translated directly into machine code that can be executed by the hardware. This direct translation allows for efficient use of system resources, resulting ...

Shah256: Why Compiled Languages Are Faster Than Interpreted Ones (And Why We Still Use Both)
Compiled = Fast execution, slower development cycle · Interpreted = Flexible development, slower execution · The choice depends on your priorities: raw performance vs development speed and portability · Modern languages often blur these lines with hybrid approaches
Top answer
1 of 4
4

(See the r/AskProgramming answer quoted in full above, under "Why are interpreted languages slower than compiled languages?")

2 of 4
1

In general, the more you know about your input in advance, the faster your algorithm can be. This applies to code execution too, since it's very inefficient to check every line as it's being executed.

Compiled, statically typed languages do a first pass over your code during compilation that lets the subsequent execution environment make a lot of assumptions that are basically shortcuts and optimizations.

AlgoCademy: Understanding the Difference Between Compiled and Interpreted Languages
Faster Development Cycle: Changes can be made and tested quickly without a separate compilation step. Slower Execution: Generally, interpreted languages run slower than compiled languages due to the on-the-fly translation.
Sololearn: Are interpreted languages generally faster than their compiled counterparts?
When I run a code written in Python (interpreted), it executes faster than a similar (same purpose) code run in Java or C (compiled). ... An interpreted language processes your source code from scratch every time you run the program.
Top answer
1 of 7
105

In programming language design and implementation, there is a large number of choices that can affect performance. I'll only mention a few.

Every language ultimately has to be run by executing machine code. A "compiled" language such as C++ is parsed, decoded, and translated to machine code only once, at compile time. An "interpreted" language, if implemented in a direct way, is decoded at runtime, at every step, every time. That is, every time we run a statement, the interpreter has to check whether it is an if-then-else, or an assignment, etc., and act accordingly. This means that if we loop 100 times, we decode the same code 100 times, wasting time. Fortunately, interpreters often optimize this through e.g. a just-in-time compiling system. (More correctly, there's no such thing as a "compiled" or "interpreted" language -- it is a property of the implementation, not of the language. Still, each language often has only one widespread implementation.)

Different compilers/interpreters perform different optimizations.

If the language has automatic memory management, its implementation has to perform garbage collection. This has a runtime cost, but relieves the programmer from an error-prone task.

A language might be closer to the machine, allowing the expert programmer to micro-optimize everything and squeeze more performance out of the CPU. However, it is arguable if this is actually beneficial in practice, since most programmers do not really micro-optimize, and often a good higher level language can be optimized by the compiler better than what the average programmer would do. (However, sometimes being farther from the machine might have its benefits too! For instance, Haskell is extremely high level, but thanks to its design choices is able to feature very lightweight green threads.)

Static type checking can also help in optimization. In a dynamically typed, interpreted language, every time one computes x - y, the interpreter often has to check whether both x,y are numbers and (e.g.) raise an exception otherwise. This check can be skipped if types were already checked at compile time.

Some languages always report runtime errors in a sane way. If you write a[100] in Java where a has only 20 elements, you get a runtime exception. This requires a runtime check, but provides a much nicer semantics to the programmer than in C, where that would cause undefined behavior, meaning that the program might crash, overwrite some random data in memory, or even perform absolutely anything else (the ISO C standard poses no limits whatsoever).

However, keep in mind that, when evaluating a language, performance is not everything. Don't be obsessed about it. It is a common trap to try to micro-optimize everything, and yet fail to spot that an inefficient algorithm/data structure is being used. Knuth once said "premature optimization is the root of all evil".

Don't underestimate how hard it is to write a program right. Often, it can be better to choose a "slower" language which has a more human-friendly semantics. Further, if there are some specific performance critical parts, those can always be implemented in another language. Just as a reference, in the 2016 ICFP programming contest, these were the languages used by the winners:

1   700327  Unagi                       Java,C++,C#,PHP,Haskell
2   268752  天羽々斬                     C++, Ruby, Python, Haskell, Java, JavaScript
3   243456  Cult of the Bound Variable  C++, Standard ML, Python

None of them used a single language.

2 of 7
20

What governs the "speed" of a programming language?

There is no such thing as the "speed" of a programming language. There is only the speed of a particular program written by a particular programmer executed by a particular version of a particular implementation of a particular execution engine running within a particular environment.

There can be huge performance differences in running the same code written in the same language on the same machine using different implementations. Or even using different versions of the same implementation. For example, running the exact same ECMAScript benchmark on the exact same machine using a version of SpiderMonkey from 10 years ago vs a version from this year will probably yield a performance increase anywhere between 2×–5×, maybe even 10×. Does that then mean that ECMAScript is 2× faster than ECMAScript, because running the same program on the same machine is 2× faster with the newer implementation? That doesn't make sense.

Has this anything to do with memory management?

Not really.

Why does this happen?

Resources. Money. Microsoft probably employs more people making coffee for their compiler programmers than the entire PHP, Ruby, and Python community combined has people working on their VMs.

For more or less any feature of a programming language that impacts performance in some way, there is also a solution. For example, C (I'm using C here as a stand-in for a class of similar languages, some of which even existed before C) is not memory-safe, so that multiple C programs running at the same time can trample on each other's memory. So, we invent virtual memory, and make all C programs go through a layer of indirection so that they can pretend they are the only ones running on the machine. However, that is slow, and so, we invent the MMU, and implement virtual memory in hardware to speed it up.

But! Memory-safe languages don't need all that! Having virtual memory doesn't help them one bit. Actually, it's worse: not only does virtual memory not help memory-safe languages, virtual memory, even when implemented in hardware, still impacts performance. It can be especially harmful to the performance of garbage collectors (which is what a significant number of implementations of memory-safe languages use).

Another example: modern mainstream general purpose CPUs employ sophisticated tricks to reduce the frequency of cache misses. A lot of those tricks amount to trying to predict what code is going to be executed and what memory is going to be needed in the future. However, for languages with a high degree of runtime polymorphism (e.g. OO languages) it is really, really hard to predict those access patterns.

But, there is another way: the total cost of cache misses is the number of cache misses multiplied by the cost of an individual cache miss. Mainstream CPUs try to reduce the number of misses, but what if you could reduce the cost of an individual miss?

The Azul Vega-3 CPU was specifically designed for running virtualized JVMs, and it had a very powerful MMU with some specialized instructions for helping garbage collection and escape detection (the dynamic equivalent to static escape analysis) and powerful memory controllers, and the entire system could still make progress with over 20000 outstanding cache misses in flight. Unfortunately, like most language-specific CPUs, its design was simply out-spent and out-brute-forced by the "giants" Intel, AMD, IBM, and the likes.

The CPU architecture is just one example of something that affects how easy or how hard it is to build a high-performance implementation of a language. A language like C, C++, D, or Rust, which fits the modern mainstream CPU programming model well, is easier to make fast than a language that has to "fight" or circumvent the CPU, like Java, ECMAScript, Python, Ruby, or PHP.

Really, it's all a question of money. If you spend equal amounts of money to develop a high-performance algorithm in ECMAScript, a high-performance implementation of ECMAScript, a high-performance operating system designed for ECMAScript, a high-performance CPU designed for ECMAScript as has been spent over the last decades to make C-like languages go fast, then you will likely see equal performance. It's just that, at this time, much more money has been spent making C-like languages fast than making ECMAScript-like languages fast, and the assumptions of C-like languages are baked into the entire stack from MMUs and CPUs to operating systems and virtual memory systems up to libraries and frameworks.

Personally, I am most familiar with Ruby (which is generally considered to be a "slow language"), so I will give two examples. First, the Hash class (one of the central data structures in Ruby, a key-value dictionary) in the Rubinius Ruby implementation is written in 100% pure Ruby, yet it has about the same performance as the Hash class in YARV (the most widely-used implementation), which is written in C. Second, there is an image manipulation library written as a C extension for YARV that also ships a (slow) pure-Ruby "fallback version", built on a ton of highly dynamic and reflective Ruby tricks, for implementations that don't support C extensions; an experimental branch of JRuby, using the Truffle AST interpreter framework and the Graal JIT compilation framework from Oracle Labs, can execute that pure-Ruby fallback as fast as YARV can execute the original highly-optimized C version. This is achieved (and it is anything but simple) by some really clever people doing really clever things with dynamic runtime optimizations, JIT compilation, and partial evaluation.

AI-FutureSchool — Understanding the Speed Difference: Compiled vs Interpreted Languages
February 13, 2026 - This process eliminates the overhead of interpretation at runtime, resulting in faster execution speeds. In contrast, interpreted languages like Python require an interpreter to read and execute the code line by line, which introduces significant ...
Medium — Compiled vs Interpreted Programming Languages | by Scott Denton
June 5, 2019 - The advantage of using a compiled language is that they tend to be much faster at run time than an interpreted language and can be optimized for certain hardware.
Reddit — r/computerscience: [Question] Why do people say compilers are faster than interpreters?
May 3, 2022 -

I'm working on a project where I personally am using Python, and some of my peers (doing a fairly separate part of the project) are using C++. I was watching James Gosling (creator of Java) talk with Lex Fridman, and he said that interpreters are easier but slower than compilers: what does he mean by this? Somewhat naively, I know that when I run my large Python scripts they start essentially instantaneously, aside from whatever the runtime of the program itself is. When my peers run their C++ code of similar length, it takes an ungodly amount of time just to compile, and only then can they run it (and maybe eke out better runtime performance). What am I missing about compilers and interpreters? I don't know very much about interpreters, for what it's worth.

Top answer (1 of 6, 23 points):
I think the question you are asking might be a bit different from what you are after. People don't really argue that compilers themselves are faster than interpreters; honestly, the opposite is generally true. The argument you generally hear is that programs that are compiled are "faster" than interpreted programs.

There is a pretty clear trade-off between an interpreted language like Python and a compiled language like C. Python doesn't require compilation, but at the cost of having to do similar compiler-like tasks at run time, whereas in C the compiler generates the machine code for the program all up front. If you had a long-running program that needed to run as fast as possible, it would make more sense to use C, so that there is less overhead during runtime.

This isn't to say C is always the better option; it really depends on what you're trying to accomplish. If you have completely finished writing your code and it still runs instantaneously, then it doesn't matter much that you wrote it in Python. But if you envision the finished program being particularly computationally or memory intensive, it might be better to just start with C. When you say your peers might eke out better performance, this is probably true: if a program finishes in 200 ms when written in Python but 100 ms when written in C, both are going to seem pretty much instantaneous. But if the programs became more intensive and took a lot longer, a C program that takes one day instead of a Python program's two days would absolutely be worth the few minutes it might take to compile.
Answer 2 of 6 (5 points):
Lots of good answers here; u/Jorengarenar summarized pretty well. I will add a quick and dirty practical example. I'll compare the Python script

```python
import sys

n = int(sys.argv[1])
print(sum(1 for i in range(n) for _ in range(n)))
```

with the C program

```c
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    int n = atoi(argv[1]);
    unsigned long acc = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            acc++;
    printf("%lu\n", acc);
    return 0;
}
```

compiled with `gcc -O0 main.c -o main`; the `-O0` flag tells the compiler not to optimize this code. Both programs expect an integer n as a command-line argument and then perform O(n^2) addition operations. The results:

C:

```
$ for i in 100 1000 10000 100000; do time ./main $i; done
10000          ./main $i  0,00s user 0,00s system 77% cpu 0,003 total
1000000        ./main $i  0,01s user 0,00s system 88% cpu 0,010 total
100000000      ./main $i  0,14s user 0,00s system 96% cpu 0,148 total
10000000000    ./main $i  11,40s user 0,01s system 99% cpu 11,445 total
```

Python:

```
$ for i in 100 1000 10000 100000; do time python3 main.py $i; done
10000          python3 main.py $i  0,04s user 0,01s system 95% cpu 0,053 total
1000000        python3 main.py $i  0,04s user 0,01s system 97% cpu 0,045 total
100000000      python3 main.py $i  2,76s user 0,01s system 99% cpu 2,784 total
10000000000    python3 main.py $i  281,08s user 0,24s system 99% cpu 4:42,44 total
```

The C program is about 20 times faster than the Python script. You may think this isn't much, but it could mean the difference between running a Python script for 20 days or a C program for one day, or handling 20000 requests instead of just 1000 requests on a server in the same amount of time. You could say "Python has NumPy and other libraries to help with that", and that is what is great about Python. You have probably heard this before: Python is a great glue language.