Answer from Lasse V. Karlsen on Stack Overflow
Top answer
1 of 15
85

Native programs run using instructions written for the processor they run on.

Interpreted languages are just that, "interpreted". Some other form of instruction is read, and interpreted, by a runtime, which in turn executes native machine instructions.

Think of it this way: talking to someone who understands your native language is generally faster than having an interpreter translate what you say into another language for the listener to understand.

Note that what I am describing above applies when a language is running in an interpreter. Many languages that have interpreters also have native compilers that build native machine instructions. The speed reduction (whatever its size might be) only applies to the interpreted context.

So it is slightly incorrect to say that the language is slow; rather, it is the context in which it is running that is slow.

C# is not an interpreted language, even though it employs an intermediate language (IL). The IL is JITted to native instructions before being executed, so it has some of the same speed reduction, but not all of it. Still, I'd bet that if you built a fully fledged interpreter for C# or C++, it would run slower as well.
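For a sense of what that IL looks like: a trivial add method compiles to a handful of stack-machine instructions, which the JIT then maps almost directly onto native ones. A sketch (hypothetical Add method, roughly what a disassembler such as ildasm would show; annotations added):

.method private static int32 Add(int32 a, int32 b) cil managed
{
    .maxstack 2
    ldarg.0      // push the first argument
    ldarg.1      // push the second argument
    add          // pop both, push their sum
    ret          // return the value on top of the stack
}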

And just to be clear, when I say "slow", that is of course a relative term.

2 of 15
43

All the answers seem to miss the really important point here: the detail of how "interpreted" code is implemented.

Interpreted scripting languages are slower because their method, object, and global variable space model is dynamic. In my opinion, this is the real definition of a scripting language, not the fact that it is interpreted. The dynamic model requires many extra hash-table lookups on each access to a variable or method call. It is also the main reason these languages are all terrible at multithreading and use a GIL (Global Interpreter Lock). These lookups are where most of the time is spent: each one is a painful random memory access, which really hurts when you get an L1/L2 cache miss.

Google's V8 JavaScript engine is so fast, approaching C speed, because of a simple optimization: it takes the object data model as fixed and generates internal code that accesses it like the data structures of a natively compiled program. When a variable or method is added or removed, the compiled code is discarded and recompiled.
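A minimal C sketch of the contrast (illustrative only, not V8's actual code; the linear search below stands in for the interpreter's hash lookup):

#include <string.h>

/* Dynamic model: the object is a name-to-value table, so every access
   must search for the property at runtime (toy layout, assumed). */
typedef struct { const char *key; double value; } Slot;
typedef struct { Slot slots[8]; int n; } DynObj;

double get_x_dynamic(const DynObj *o) {
    for (int i = 0; i < o->n; i++)
        if (strcmp(o->slots[i].key, "x") == 0)
            return o->slots[i].value;   /* found after a string-compare walk */
    return 0.0;                          /* property missing */
}

/* Fixed model (what V8's "hidden classes" enable): "x" lives at a
   known offset, so the access compiles to a single memory load. */
struct Point { double x, y; };

double get_x_fixed(const struct Point *p) {
    return p->x;
}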

The technique is well explained in the Deutsch/Schiffman paper "Efficient Implementation of the Smalltalk-80 System".

The question of why PHP, Python, and Ruby aren't doing this is simple to answer: the technique is extremely complicated to implement.

And only Google has the money to pay for this, because a fast in-browser JavaScript engine is a fundamental need of its billion-dollar business model.

🌐
Quora
quora.com › Why-are-interpreted-languages-slow
Why are interpreted languages slow? - Quora
In general, slower than compiled languages, yes. That’s because the processor can only execute machine code. A compiler translates non-machine code into machine code once before you execute it.
Top answer
1 of 4
4

This gets somewhat tricky with more modern hybrid languages/runtimes, but I'll go the original route because it's easier to explain.

The simple answer is that in an interpreted language, everything you do has to run through an extra layer that translates it into what the physical machine is doing, every single time it's run. For example, let's do a + b. In a compiled language that would be one instruction on the CPU, something like iadd a, b, c. Addition takes about one CPU cycle, so you can do roughly 2 billion of these per second. All the type checking, variable locations, and such have already been done.

In an interpreted language the parser will deconstruct a + b into: looking up a and b in the current scope, making sure they're the right type, looking up the addition function that suits those types, performing the addition, and storing the result back. In a naive implementation this could be 100+ instructions, three or more function calls, and multiple memory lookups.
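A minimal C sketch of that naive path (hypothetical layout; real interpreters use hash tables and far more machinery):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef enum { T_INT, T_FLOAT } Type;
typedef struct { Type type; union { long i; double f; } as; } Value;

/* Toy two-entry scope; a real interpreter would use a hash table. */
static struct { const char *name; Value val; } scope[] = {
    { "a", { T_INT, { .i = 2 } } },
    { "b", { T_INT, { .i = 3 } } },
};

static Value *lookup(const char *name) {        /* memory lookups */
    for (size_t i = 0; i < 2; i++)
        if (strcmp(scope[i].name, name) == 0)
            return &scope[i].val;
    fprintf(stderr, "undefined: %s\n", name);
    exit(1);
}

/* Evaluating "a + b": two scope lookups, two type checks, a dispatch
   on the operand types, the add itself, and boxing the result. */
static Value eval_add(const char *lhs, const char *rhs) {
    Value *a = lookup(lhs), *b = lookup(rhs);
    if (a->type == T_INT && b->type == T_INT)
        return (Value){ T_INT, { .i = a->as.i + b->as.i } };
    if (a->type == T_FLOAT && b->type == T_FLOAT)
        return (Value){ T_FLOAT, { .f = a->as.f + b->as.f } };
    fprintf(stderr, "type error\n");
    exit(1);
}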

Modern interpreted languages are much better than this, because they're commonly semi-compiled. Java and Python convert their raw code to something called "bytecode" that runs on a virtual machine. The virtual machine is sort of an idealized replica of a computer that you can compile your code to. The benefit is that your code is still fully portable, but you don't have to do all the same looking up/verification that a purely interpreted system does. Take a look at how similar Java's bytecode looks to assembly.
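For instance, a trivial static int add(int a, int b) { return a + b; } method, disassembled with javap -c, prints roughly the following (annotations added; exact output varies by compiler version):

static int add(int, int);
  Code:
     0: iload_0      // push the first int argument
     1: iload_1      // push the second
     2: iadd         // pop two ints, push their sum
     3: ireturn      // return the top of the stack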

Some virtual machines and execution engines (like Oracle's JVM, LLVM, PyPy, and Chrome's V8 JavaScript engine) do something called "just-in-time" compilation. If the VM notices a piece of bytecode that's running a lot, it compiles it into hardware instructions so it can run at full speed. Because of this, these languages can get very close to the speed of compiled languages.

However, there's one more thing that keeps many of these from achieving full speed. That's garbage collection. Many modern OO languages let you create a new object, but don't require that you explicitly delete it. This makes our lives much better as programmers, but how does the VM know when to delete it? One of the easiest ways to find unused things is by stopping the VM, looking at all the objects, erasing the inaccessible ones, then resuming it. This is very bad for performance and there's a lot of research in this area for making it better.
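The scheme described is essentially stop-the-world mark-and-sweep. A heavily simplified C sketch of the two phases (toy object layout; recursion stands in for a real mark stack):

#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct Obj {
    bool marked;
    struct Obj *children[2];   /* outgoing references */
    struct Obj *next;          /* every allocation, linked in a list */
} Obj;

static Obj *all_objects;       /* head of the allocation list */

/* Phase 1: mark everything reachable from the roots. */
static void mark(Obj *o) {
    if (o == NULL || o->marked)
        return;
    o->marked = true;
    for (int i = 0; i < 2; i++)
        mark(o->children[i]);
}

/* Phase 2: walk the allocation list and free whatever stayed
   unmarked. The whole VM is paused while both phases run; that
   pause is the performance cost described above. */
static void sweep(void) {
    Obj **link = &all_objects;
    while (*link) {
        Obj *o = *link;
        if (!o->marked) {
            *link = o->next;
            free(o);            /* unreachable: reclaim */
        } else {
            o->marked = false;  /* reset for the next collection */
            link = &o->next;
        }
    }
}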

If I didn't explain something well enough or you want more detail, I'd be happy to expound.

Cheers!

2 of 4
1

In general, the more you know about your input in advance, the faster your algorithm can be. This applies to code execution too, since it's very inefficient to check every line as it's being executed.

Compiled, statically typed languages do a first pass over your code during compilation that lets the subsequent execution environment make a lot of assumptions that are basically shortcuts and optimizations.

🌐
Reddit
reddit.com › r/programminglanguages › why are interpreters slower than compiled code?
r/ProgrammingLanguages on Reddit: Why are Interpreters slower than compiled code?
December 6, 2025 -

What a stupid question, of course interpreters are slower than compiled code because native code is faster! Now, hear me out. This might be common sense, and I'm not questioning it at all: compiled languages do run faster. But I want to know the fundamental reasons why this is the case, not just "it's faster because it's faster" or anything like that. Down in the deepest guts of interpreters and of code that has been fully compiled, at the most fundamental level, where code runs on processors as a bunch of 1s and 0s (OK, maybe not that low level; let's go down to assembly), what about them actually creates the speed difference?

I've researched this extensively, or at least tried to, but all I'm getting are variations of "interpreters translate program code into machine code line by line at runtime, while compilers translate the entire program to machine code once before execution rather than translating each line every time it runs, so they're faster", but I know for a fact that this ridiculous statement is hopelessly wrong.

For one, interpreters absolutely do not translate the program into native code; that wouldn't be an interpreter at all, that would be a Just-in-Time compiler. The interpreter itself is compiled to machine code, yes (well, if your interpreter is written in a compiled language), but it doesn't turn the program it runs into machine code; it runs it directly.

Secondly, I'm not some god-tier .NET C# VM hacker from Microsoft or one of the geniuses behind V8 at Google, nor have I written an interpreter before, but I know enough about interpreter theory to know that they absolutely do not run code line by line. Lexical analysis and parsing are almost always done in one shot over the entire program, which at the very least becomes an AST. The only interpreters that actually run code line by line belong to a design known as a syntax-directed interpreter, in which there is no program representation and the parser executes code as soon as it parses it. The old Wikipedia page on interpreters described this as:

An interpreter generally uses one of the following strategies for program execution:

1. Parse the source code and perform its behavior directly;

2. Translate source code into some efficient intermediate representation and immediately execute this;

3. Explicitly execute stored precompiled code[1] made by a compiler which is part of the interpreter system (Which is often combined with a Just-in-Time Compiler).

A syntax-directed interpreter would be the first one. But virtually no interpreter is designed this way today, except in rare cases where people want to work with a handicap; even 20 years ago you'd have been hard-pressed to find an interpreter of this design, and for good reason: executing code this way is utter insanity. The performance would be so comically bad that even something simple like adding many numbers together would probably take forever. And how would you even handle things like functions, which aren't run immediately, or control flow? Following this logic, this can't be the reason why interpreters are slower.

I also see a less common explanation given, which is that interpreters don't optimize but compilers do. But I don't buy that this is the only reason. I've read a few posts from the V8 team, where they mention that one of V8's compilers, Sparkplug, doesn't optimize at all, yet even its completely unoptimized machine code is so much faster than its interpreter.

So, if all this can't be the reason why interpreters are slower, then what is?

Top answer
1 of 27
108
An interpreter adds indirection to program execution.

Compilers of static languages have all the time in the universe to apply optimizations.
2 of 27
23
Take a line of code like this:

a := b + c   # say a, b, c have i64 type

This can be compiled, even with a naive compiler, into about 4 machine instructions. A semi-decent one can do it in one instruction (when a, b, and c reside in registers, for example).

But the same line, if interpreted, might generate 4 bytecode instructions. CPUs don't run bytecode; you need an additional program which executes that program-as-data. Those 4 instructions might involve dozens of machine instructions depending on how the interpreter works: dispatching to the handler for each instruction, fetching the operands, doing the work, then storing the results in each case. That's reason number 1 why interpreting is slower.

Reason number 2 is that interpreted languages tend to be dynamically typed. That means that even once you've dispatched to the handler for the ADD instruction of my example, it then needs to dispatch, at runtime, on the types of b and c. That makes it even slower.

This has absolutely nothing to do with lexing or parsing. Generating bytecode from source can be done at a million lines per second; it is basically instant. And you can compile bytecode ahead of time; that won't make programs any faster.

There are lots of products (e.g. using complex tracing-JIT methods) that try to make normally-interpreted languages faster, with varying amounts of success (though they can also increase start-up times), but such languages are just inherently slower.

You can still use interpreted languages to create performant applications, if used sensibly: e.g. as 'scripting' languages where most of the work is done via native code libraries, or in interactive apps where most time is spent by the user typing or moving a cursor, with the actual tasks being trivial (e.g. editing text). So just accept that some languages tend to be slower, but offer other advantages.
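Reason number 1 can be made concrete with a toy C dispatch loop (hypothetical bytecode format; real VMs are far more elaborate):

enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

/* A tiny stack VM. Every bytecode instruction costs a fetch, a
   switch dispatch, operand traffic, and stack adjustment: dozens
   of machine instructions for one source-level add. */
static void run(const unsigned char *code, long *vars) {
    long stack[16];
    int sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {                          /* fetch + dispatch */
        case OP_LOAD:  stack[sp++] = vars[code[pc++]];   break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_STORE: vars[code[pc++]] = stack[--sp];   break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    long vars[3] = { 0, 2, 3 };   /* slots for a, b, c */
    /* a := b + c as four bytecode instructions: */
    const unsigned char prog[] =
        { OP_LOAD, 1, OP_LOAD, 2, OP_ADD, OP_STORE, 0, OP_HALT };
    run(prog, vars);              /* vars[0] is now 5 */
    return 0;
}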
🌐
TutorialsPoint
tutorialspoint.com › why-is-python-slower-than-other-languages
Why is Python slower than other languages?
As we know, Python is an interpreted language, while C is a compiled language. Interpreted code is always slower than direct machine code because it takes a lot more instructions in order to implement an interpreted instruction than to implement an actual machine instruction.
🌐
Computer Science Wiki
computersciencewiki.org › index.php › Interpreted_and_compiled_languages
Interpreted and compiled languages - Computer Science Wiki
August 20, 2021 - Compiled languages are converted directly into machine code that the processor can execute. As a result, they tend to be faster and more efficient to execute than interpreted languages.
Top answer
1 of 10
35

I refute the premise. There are interpreters/REPLs for compiled, static languages; they're just not as much a part of the common workflow as with dynamic languages. Though that also depends on the application. For example, scientists at CERN work a lot in C++ within the ROOT framework, and they also use the Cling interpreter heavily, an approach which combines many of the advantages of a fast compiled language with those of a slow interpreted one like Python, especially for scientific purposes.

With some other languages it's even more drastic. Haskell is a static, compiled language (in some ways even more static than OO languages), but it is very common to develop Haskell interactively using GHCi, either as a REPL (see the online version) or just as a quick typechecking pass to highlight what needs to be worked on. Once something is fully implemented, it becomes part of a library that is always compiled, resulting in fast code, which can then be called either from a fully compiled program or from another interactive session.

Of course it can also go the other way around: typical interpreted languages like Python, JavaScript and Common Lisp are all possible to compile at least in some senses of the word (either JIT or a subset of the language can be statically compiled). Though in my opinion this approach is way more limited than starting with a strong statically typed programming language and then using it more interactively, it can still be a good option for optimising the bottleneck parts of an interpreted program, and is indeed commonly done.

2 of 10
18

Why isn't it a thing to interpret a codebase for quick iterative development, instead of generating code for a binary each time?

Many languages, including C and C++, don't lend themselves to REPL-style interpreters. Making a one-line or even one-character change can have widespread impact on the behavior of a program (consider changes to a #define, for example). Somewhat ironically, this same avalanche effect of small code changes leading to large program changes also makes incremental compilation very difficult. So languages that take a very long time to compile will tend to be ones that are also troublesome to interpret.
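A hypothetical illustration of that avalanche in C: one character in a shared header changes struct sizes, array bounds, and even which code gets compiled, across every file that includes it:

/* config.h -- change 64 to 65 and every translation unit that
   includes this header sees different struct sizes, different
   array bounds, and possibly entirely different code paths. */
#define BUFFER_SIZE 64

struct Packet {
    char payload[BUFFER_SIZE];   /* the size of every Packet changes */
};

#if BUFFER_SIZE > 64
/* a completely different implementation gets compiled in here */
#endif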

Find elsewhere
Top answer
1 of 7
105

In programming language design and implementation, there is a large number of choices that can affect performance. I'll only mention a few.

Every language ultimately has to be run by executing machine code. A "compiled" language such as C++ is parsed, decoded, and translated to machine code only once, at compile time. An "interpreted" language, if implemented in a direct way, is decoded at runtime, at every step, every time. That is, every time we run a statement, the interpreter has to check whether it is an if-then-else, or an assignment, etc., and act accordingly. This means that if we loop 100 times, we decode the same code 100 times, wasting time. Fortunately, interpreters often optimize this through e.g. a just-in-time compiling system. (More correctly, there's no such thing as a "compiled" or "interpreted" language; it is a property of the implementation, not of the language. Still, each language often has only one widespread implementation.)

Different compilers/interpreters perform different optimizations.

If the language has automatic memory management, its implementation has to perform garbage collection. This has a runtime cost, but relieves the programmer from an error-prone task.

A language might be closer to the machine, allowing the expert programmer to micro-optimize everything and squeeze more performance out of the CPU. However, it is arguable whether this is actually beneficial in practice, since most programmers do not really micro-optimize, and often a good higher-level language can be optimized by the compiler better than what the average programmer would do. (However, sometimes being farther from the machine might have its benefits too! For instance, Haskell is extremely high level, but thanks to its design choices is able to feature very lightweight green threads.)

Static type checking can also help with optimization. In a dynamically typed, interpreted language, every time one computes x - y, the interpreter often has to check whether both x and y are numbers and (e.g.) raise an exception otherwise. This check can be skipped if the types were already checked at compile time.
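Sketched in C (illustrative tagged value, not any particular interpreter's representation):

#include <stdlib.h>

typedef struct { int is_number; double num; } Dyn;

/* Dynamically typed: the check runs on every single subtraction. */
double sub_dynamic(Dyn x, Dyn y) {
    if (!x.is_number || !y.is_number)
        abort();                 /* stand-in for raising an exception */
    return x.num - y.num;
}

/* Statically typed: the compiler already proved both operands are
   numbers, so the subtraction is a single machine instruction. */
double sub_static(double x, double y) {
    return x - y;
}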

Some languages always report runtime errors in a sane way. If you write a[100] in Java where a has only 20 elements, you get a runtime exception. This requires a runtime check, but provides a much nicer semantics to the programmer than in C, where that would cause undefined behavior, meaning that the program might crash, overwrite some random data in memory, or even perform absolutely anything else (the ISO C standard poses no limits whatsoever).
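In C terms, the Java behavior amounts to an extra comparison and branch on every indexing operation (a sketch, not how any particular JVM implements it):

#include <stdio.h>
#include <stdlib.h>

/* Java-style: every access pays for a range check, but failures
   are reported sanely. */
int checked_get(const int *a, int len, int i) {
    if (i < 0 || i >= len) {
        fprintf(stderr, "index %d out of bounds for length %d\n", i, len);
        abort();                 /* stand-in for throwing an exception */
    }
    return a[i];
}

/* C-style: no check -- faster, but a[100] on a 20-element array is
   undefined behavior. */
int unchecked_get(const int *a, int i) {
    return a[i];
}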

However, keep in mind that, when evaluating a language, performance is not everything. Don't be obsessed about it. It is a common trap to try to micro-optimize everything, and yet fail to spot that an inefficient algorithm/data structure is being used. Knuth once said "premature optimization is the root of all evil".

Don't underestimate how hard it is to write a program right. Often, it can be better to choose a "slower" language which has a more human-friendly semantics. Further, if there are some specific performance critical parts, those can always be implemented in another language. Just as a reference, in the 2016 ICFP programming contest, these were the languages used by the winners:

Rank  Score   Team                        Languages
1     700327  Unagi                       Java, C++, C#, PHP, Haskell
2     268752  天羽々斬                     C++, Ruby, Python, Haskell, Java, JavaScript
3     243456  Cult of the Bound Variable  C++, Standard ML, Python

None of them used a single language.

2 of 7
20

What governs the "speed" of a programming language?

There is no such thing as the "speed" of a programming language. There is only the speed of a particular program written by a particular programmer, executed by a particular version of a particular implementation of a particular execution engine, running within a particular environment.

There can be huge performance differences in running the same code written in the same language on the same machine using different implementations, or even using different versions of the same implementation. For example, running the exact same ECMAScript benchmark on the exact same machine using a version of SpiderMonkey from 10 years ago vs. a version from this year will probably yield a performance increase of anywhere between 2× and 5×, maybe even 10×. Does that then mean that ECMAScript is 2× faster than ECMAScript, because running the same program on the same machine is 2× faster with the newer implementation? That doesn't make sense.

Has this anything to do with memory management?

Not really.

Why does this happen?

Resources. Money. Microsoft probably employs more people making coffee for their compiler programmers than the entire PHP, Ruby, and Python community combined has people working on their VMs.

For more or less any feature of a programming language that impacts performance in some way, there is also a solution. For example, C (I'm using C here as a stand-in for a class of similar languages, some of which even existed before C) is not memory-safe, so that multiple C programs running at the same time can trample on each other's memory. So, we invent virtual memory, and make all C programs go through a layer of indirection so that they can pretend they are the only ones running on the machine. However, that is slow, and so, we invent the MMU, and implement virtual memory in hardware to speed it up.

But! Memory-safe languages don't need all that! Having virtual memory doesn't help them one bit. Actually, it's worse: not only does virtual memory not help memory-safe languages, virtual memory, even when implemented in hardware, still impacts performance. It can be especially harmful to the performance of garbage collectors (which is what a significant number of implementations of memory-safe languages use).

Another example: modern mainstream general purpose CPUs employ sophisticated tricks to reduce the frequency of cache misses. A lot of those tricks amount to trying to predict what code is going to be executed and what memory is going to be needed in the future. However, for languages with a high degree of runtime polymorphism (e.g. OO languages) it is really, really hard to predict those access patterns.

But, there is another way: the total cost of cache misses is the number of cache misses multiplied by the cost of an individual cache miss. Mainstream CPUs try to reduce the number of misses, but what if you could reduce the cost of an individual miss?

The Azul Vega-3 CPU was specifically designed for running virtualized JVMs, and it had a very powerful MMU with some specialized instructions for helping garbage collection and escape detection (the dynamic equivalent to static escape analysis) and powerful memory controllers, and the entire system could still make progress with over 20000 outstanding cache misses in flight. Unfortunately, like most language-specific CPUs, its design was simply out-spent and out-brute-forced by the "giants" Intel, AMD, IBM, and the likes.

The CPU architecture is just one example that has an impact on how easy or how hard it is to have a high-performance implementation of a language. A language like C, C++, D, Rust that is a good fit for the modern mainstream CPU programming model will be easier to make fast than a language that has to "fight" and circumvent the CPU, like Java, ECMAScript, Python, Ruby, PHP.

Really, it's all a question of money. If you spend equal amounts of money to develop a high-performance algorithm in ECMAScript, a high-performance implementation of ECMAScript, a high-performance operating system designed for ECMAScript, a high-performance CPU designed for ECMAScript as has been spent over the last decades to make C-like languages go fast, then you will likely see equal performance. It's just that, at this time, much more money has been spent making C-like languages fast than making ECMAScript-like languages fast, and the assumptions of C-like languages are baked into the entire stack from MMUs and CPUs to operating systems and virtual memory systems up to libraries and frameworks.

Personally, I am most familiar with Ruby (which is generally considered to be a "slow language"), so I will give two examples. First, the Hash class (one of the central data structures in Ruby, a key-value dictionary) in the Rubinius Ruby implementation is written in 100% pure Ruby, and it has about the same performance as the Hash class in YARV (the most widely-used implementation), which is written in C. Second, there is an image manipulation library written as a C extension for YARV that also has a (slow) pure-Ruby "fallback version" for implementations that don't support C extensions, built from a ton of highly dynamic and reflective Ruby tricks. An experimental branch of JRuby, using the Truffle AST interpreter framework and the Graal JIT compilation framework from Oracle Labs, can execute that pure-Ruby fallback as fast as YARV can execute the original highly-optimized C version. This is simply (well, anything but) achieved by some really clever people doing really clever stuff with dynamic runtime optimizations, JIT compilation, and partial evaluation.

🌐
Sololearn
sololearn.com › en › Discuss › 2053754 › are-interpreted-languages-generally-faster-than-their-compiled-counterparts
Are interpreted languages generally faster than their compiled counterparts? | Sololearn: Learn to code for FREE!
An interpreted language processes your source code from scratch every time you run the program. A compiled language first translates the program to machine code or bytecode (in case of Java).
🌐
freeCodeCamp
freecodecamp.org › news › compiled-versus-interpreted-languages
Interpreted vs Compiled Programming Languages: What's the Difference?
January 10, 2020 - Programs that are compiled into native machine code tend to be faster than interpreted code. This is because the process of translating code at run time adds to the overhead, and can cause the program to be slower overall.
🌐
Codecademy
codecademy.com › home › what is the fastest programming language?
What is the Fastest Programming Language?
August 30, 2022 - Interpreted languages like Python, JavaScript, Ruby, and PHP run by converting your source code on the fly into machine code as it is running. Because this conversion process happens while the code is running and adds overhead, interpreted languages ...
🌐
Innokrea
innokrea.com › home › blog › compilation vs. interpretation (part 3)
Compilation vs. Interpretation (part 3) - Applications and software for companies, enterprises, Gdansk
June 25, 2024 - Interpreting code is slower than running compiled code because the interpreter must first analyze each expression and then execute corresponding actions based on that analysis, whereas compiled code solely executes actions.
🌐
DEV Community
dev.to › tofuwave › interpreted-languages-vs-compiled-languages-whats-the-difference-45eh
Interpreted Languages vs Compiled Languages: What's the Difference? - DEV Community
April 24, 2023 - One of the biggest disadvantages of interpreted languages is that they are generally slower than compiled languages. Because the interpreter has to translate code on the fly, it can't optimize the code as much as a compiler can.
Top answer
1 of 7
26

A compiled language like C is usually compiled directly into machine code. When you run the code, it is executed directly by the CPU.

A fully interpreted language like BASIC or PHP is usually interpreted each time it runs. When you execute your code, the CPU executes the interpreter, and the interpreter reads and executes your source code. (PHP can be classified as fully interpreted, since while it does use opcodes, they are usually thrown away after the execution.)

A bytecode-interpreted language like Python is compiled from source code to bytecode that is executed by a virtual machine. The CPU runs the VM, and the VM executes each bytecode instruction. In Python, the bytecode is compiled the first time the code is executed.
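For instance, on CPython 3.10, dis.dis("a + b") prints roughly the following (output differs across versions; 3.11 replaced BINARY_ADD with a generic BINARY_OP):

  1           0 LOAD_NAME                0 (a)
              2 LOAD_NAME                1 (b)
              4 BINARY_ADD
              6 RETURN_VALUE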

In Java, bytecode is compiled ahead of execution. The Java VM also has a special feature called Just-in-time compilation. This means that during execution, it may compile some of the bytecode to machine code, which it can send to the CPU to execute directly.

In conclusion, with compiled languages, the CPU runs the code directly. In interpreted languages, the CPU usually runs the interpreter or virtual machine. This makes interpreted languages generally slower than compiled languages, due to the overhead of running the VM or interpreter.

NOTE: While we speak of interpreted and compiled languages, what we are really discussing is the usual execution style of a language. PHP can be compiled (using HipHop), and C can be interpreted (using Parrot VM).

2 of 7
7

OK, there are a lot of incorrect posts here; time for a long answer.

A compiler is basically clear: it translates a program from the source language to the target language. Both languages can be anything: a high-level language, virtual machine bytecode, or machine code.

An interpreter, on the other hand, does not perform a translation; it directly performs the actions prescribed by the source-language construct, i.e. it interprets it.

Let's consider a hypothetical add instruction in a stack-based machine, which adds the two top elements of the stack and pushes the result back. An interpreter will directly perform that "add the two top elements and push the result back", in a manner similar to:

switch (op)
{
    ...
    case OP_ADD:
        op1 = pop (stack);    /* read memory, adjust the stack pointer */
        op2 = pop (stack);    /* and again */
        res = op1 + op2;      /* the actual add */
        push (stack, res);    /* write memory, adjust the stack pointer */
        break;
    ...
}

As you can see, for a single add insn, many operations are performed: reading and writing memory, incrementing and decrementing the stack pointer, the add operation itself, the overhead of the switch (if the interpreter is implemented that way), and the overhead of the loop which reads each subsequent insn and decides how to process it.

If the interpreter worked on an AST instead, it might look like:

switch (op)
{
    ...
    case OP_ADD:
        op1 = tree->eval (left);    /* recursively evaluate the left subtree */
        op2 = tree->eval (right);   /* and the right one */
        return op1 + op2;           /* the actual add */
    ...
}

Again, many, many insns are executed to perform whatever is required by the add semantics.

🌐
Musing Mortoray
mortoray.com › home › why language interpreters and virtual machines are slow
Why language interpreters and virtual machines are slow - Musing Mortoray
January 7, 2026 - Like a CPU, they step through instructions and execute them, but in software, rather than hardware. This gives them flexibility in how they implement the language’s abstract machine, but results in them being slower than native code.
🌐
DEV Community
dev.to › kopylov_vlad › why-programming-languages-are-slow-1b2d
Why programming languages are slow - DEV Community
August 23, 2021 - ... Second of all java and c# compile to byte code, not python and what other language you typed there. Each modern interpreted language doesn't read and execute each line of the source code at the same time.
🌐
Study.com
study.com › computer science › computer programming › programming languages
Interpreted Languages | Study.com
Interpreted languages tend to be slower because they perform translation during runtime, adding overhead to the execution process. Each line of code must be parsed, analyzed, and converted to an executable form every time the program runs, which ...
🌐
Quora
quora.com › Is-it-true-that-interpreted-languages-are-slower-than-compiled-languages-If-so-why
Are compiled or interpreted languages faster? - Quora
November 9, 2015 - Answer (1 of 30): Imagine that you had a piece of text which you wanted to communicate to someone who didn’t speak the language that the text was written in. You have 2 choices. First, you could translate the whole text into their language and write it out, then give it to them to read.