Several reasons:
- faster development loop, write-test vs write-compile-link-test
- easier to arrange for dynamic behavior (reflection, metaprogramming)
- makes the whole system portable (just recompile the underlying C code and you are good to go on a new platform)
Think of what would happen if the system was not interpreted. Say you used translation-to-C as the mechanism. The compiled code would periodically have to check whether it had been superseded by metaprogramming. A similar situation arises with eval()-type functions. In those cases it would have to run the compiler again, an outrageously slow process, or it would have to keep the interpreter around at run time anyway.
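For example, here is a minimal Ruby sketch (class and method names invented for illustration) of the kind of run-time redefinition that ahead-of-time compiled code would have to watch for:

class Greeter
  def greet
    "hello"
  end
end

g = Greeter.new
puts g.greet    # => "hello"

# Metaprogramming: redefine the method from a string at run time.
# Any machine code compiled for the original #greet is now stale.
Greeter.class_eval <<-RUBY
  def greet
    "bonjour"
  end
RUBY

puts g.greet    # => "bonjour" -- same object, new behavior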
The only alternative here is a JIT compiler. These systems are highly complex and sophisticated and have even bigger run-time footprints than all the other alternatives. They start up very slowly, making them impractical for scripting. Ever seen a Java script? I haven't.
So, you have two choices:
- all the disadvantages of both a compiler and an interpreter
- just the disadvantages of an interpreter
It's not surprising that generally the primary implementation just goes with the second choice. It's quite possible that some day we may see secondary implementations like compilers appearing. Ruby 1.9 and Python have bytecode VMs; those are halfway there. A compiler might target just non-dynamic code, or it might have various levels of language support declarable as options. But since such a thing can't be the primary implementation, it represents a lot of work for a very marginal benefit. Ruby already has 200,000 lines of C in it...
I suppose I should add that one can always add a compiled C extension (or, with some effort, one in another language). So, say you have a slow numerical operation. If you add, say, Array#newOp with a C implementation, then you get the speedup, the program stays in Ruby (or whatever), and your environment gets a new instance method. Everybody wins! So this reduces the need for a problematic secondary implementation.
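As a rough sketch of what that buys you: MRI's own Array#sum is already implemented in C (since Ruby 2.4), so benchmarking it against the equivalent pure-Ruby reduce shows the kind of speedup a hand-written C method like the hypothetical Array#newOp would give (exact numbers vary by machine and Ruby version):

require 'benchmark'

data = Array.new(1_000_000) { rand(100) }

Benchmark.bm(12) do |bm|
  # Pure Ruby: every iteration runs through the bytecode interpreter.
  bm.report('ruby reduce:') { data.reduce(0) { |acc, n| acc + n } }
  # Array#sum is implemented in C inside MRI, so the loop itself
  # runs as compiled machine code.
  bm.report('C-level sum:') { data.sum }
end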
Exactly like Java or C# (in their typical implementations), Python gets compiled first into some form of bytecode, the details depending on the implementation: CPython uses a specialized format of its own, Jython targets the JVM just like typical Java, IronPython targets the CLR just like typical C#, and so forth. That bytecode then gets further processed for execution by a virtual machine (AKA interpreter), which may also generate machine code "just in time" (known as JIT) if and when warranted: CLR and JVM implementations often do; CPython's own virtual machine typically doesn't, but can be made to, e.g. with psyco or Unladen Swallow.
JIT may pay for itself for sufficiently long-running programs (if memory is much cheaper than CPU cycles), but it may not (due to slower startup times and a larger memory footprint), especially when the types also have to be inferred or specialized as part of the code generation. Generating machine code without type inference or specialization is easy, if that's what you want -- e.g., freeze does it for you -- but it really doesn't deliver the advantages that "machine code fetishists" attribute to it. E.g., you get an executable binary of 1.5 to 2 MB in lieu of a tiny "hello world" .pyc -- not much point!-). That executable is stand-alone and distributable as such, but it will only work on a very narrow range of operating systems and CPU architectures, so the tradeoffs are quite iffy in most cases. And the time it takes to prepare the executable is quite long indeed, so it would be a crazy choice to make that mode of operation the default one.
Things aren't just black and white. At the very least, they're also big and small, loud and quiet, blue and orange, grey and gray, long and short, right and wrong, etc.
Interpreted/compiled is just one way to categorize languages, and it's completely independent from (among countless other things) whether you call the same language a "scripting language" or not. To top it off, it's also a broken classification:
- Interpreted/compiled depends on the language implementation, not on the language (this is not just theory, there are indeed quite a few languages for which both interpreters and compilers exist)
- There are language implementations (lots of them, including most Ruby implementations) that are compilers, but "only" compile to bytecode and interpret that bytecode.
- There are also implementations that switch between interpreting and compiling to native code (JIT compilers).
You see, reality is a complex beast ;) Ruby is, as mentioned above, frequently compiled. The output of that compilation is then interpreted, at least in some cases - there are also implementations that JIT-compile (Rubinius, and IIRC JRuby compiles to Java bytecode after a while). The reference implementation has been a compiler for a long time, and IIRC still is. So is Ruby interpreted or compiled? Neither term is meaningful unless you define it ;)
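You can watch MRI's compiler at work yourself: since 1.9, the bytecode it produces is exposed via RubyVM::InstructionSequence (an MRI-specific API; other implementations don't provide it):

# Compile a snippet to YARV bytecode and disassemble it. This is the
# "compiler" half of MRI; the VM then interprets these instructions.
iseq = RubyVM::InstructionSequence.compile("puts 1 + 2")
puts iseq.disasm
# Prints VM instructions such as putobject and opt_plus
# (exact names vary by Ruby version).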
But back to the question: "Scripting language" isn't a property of the language either; it depends on how the language is used - namely, whether the language is used for scripting tasks. If you're looking for a definition, the Wikipedia page on "Scripting language" may help (just don't let it confuse you with notes on implementation details, such as that scripts are usually interpreted). There are indeed a few programs that use Ruby for scripting tasks, and there are doubtless numerous free-standing Ruby programs that would likely qualify as scripts (web scraping, system administration, etc.).
So yes, I guess one can call Ruby a scripting language. Of course that doesn't mean a Ruby on Rails web app is just a script.
Yes.
Detailed response:
A scripting language is typically used to control applications that are often not written in that language. For example, shell scripts can call arbitrary console applications.
Ruby is a general purpose dynamic language that is frequently used for scripting.
You can make arbitrary system calls using backtick notation, as shown below.
`<system command>`
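For example (a minimal sketch assuming a Unix-like system):

# Backticks run a shell command and return its standard output as a String.
listing = `ls -l`
puts listing

# The exit status of the last command is available via $?.
`false`
puts $?.exitstatus    # => 1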
There are also many excellent Ruby gems such as Watir and RAutomation for automating web and native GUIs.
For a definition of "scripting language", see here.
Are Python, PHP, JavaScript, Perl & Ruby Classed As Scripting Languages?
Why is Ruby used so much in startups and scale-ups over other languages?
Is Ruby really an interpreted language if all of its implementations are compiled into bytecode?
What language features, if any, give JS a performance advantage over Python and Ruby?
What is an interpreted language?
An interpreted language is a programming language that executes instructions directly, without the need for a separate compilation step. The instructions are translated and executed line by line, making it easier and quicker to develop and test code.
Which programming languages are commonly interpreted?
Some popular interpreted languages include Python, JavaScript, Ruby, Perl, and PHP. These languages are widely used in web development, scripting, and automation tasks due to their ease of use and quick development process.
How does an interpreted language differ from a compiled language?
In an interpreted language, the code is executed line by line, while in a compiled language, the entire code is converted into machine language before execution. This means that interpreted languages offer more flexibility in terms of modifying and testing code on the fly.
Hi people,
I'm coming from the world of Java / Kotlin web applications, and I'm starting to get curious about other languages that are really liked among big companies.
I am a total beginner and I don't understand why a company would go for Ruby instead of other interpreted languages such as Python or a JavaScript stack.
Although I totally understand that bootstrapping an MVP with Ruby is soooo easy, it feels to me that maintaining a code base with hundreds of files, a big domain, a lot of tests, ... is very hard with it (as it is with Python).
Can you explain like I'm 5 why companies are going for Ruby? If you remove the "because the first dev only knew Ruby, so he bootstrapped very fast, we were in production and then we continued building over his code" reason, what is left for Ruby?
TLDR: I don't want to be offensive, I would just like to talk with senior Ruby programmers to understand the hype, the salaries, and why all of this is justified. How is it to maintain a Ruby codebase? OK, it's easy to build a simple CRUD blog app with articles and comments, but what about a whole marketplace?
Thanks :)
EDIT: Thanks to all of you for your answers, you rock!
Nearly every language is "compiled" nowadays, if you count bytecode as being compiled. Even Emacs Lisp is compiled. Ruby was a special case because until recently, it wasn't compiled into bytecode.
I think you're right to question the utility of characterizing languages as "compiled" vs. "interpreted." One useful distinction, though, is whether the language creates machine code (e.g. x86 assembler) directly from user code. C, C++, many Lisps, and Java with JIT enabled do, but Ruby, Python, and Perl do not.
People who don't know better will call any language that has a separate manual compilation step "compiled" and ones that don't "interpreted."
Yes, Ruby's still an interpreted language, or more precisely, Matz's Ruby Interpreter (MRI), which is what people usually talk about when they talk about Ruby, is still an interpreter. The compilation step is simply there to reduce the code to something that's faster to execute than interpreting and reinterpreting the same code time after time.
Hi all – Are there aspects of Python and Ruby as languages that prevent them from being optimized to the extent that JavaScript has?
Different way to ask this question: If you had a $75 million budget, could you produce a Python or Ruby JIT that was as performant as Chromium's V8 JIT or Microsoft's Chakra? (You could for example pay 75 elite developers $333,333 per year for three years...)
I ask in part because Python and Ruby have had maybe 15 years to match JS runtime performance, and they haven't done so. Sometimes people make the argument that JS got fast because of economics – that Google, Microsoft, Apple, and Mozilla threw a lot of money at JS JIT development. But Python and Ruby are huge open source projects, and my expectation would be that such projects have plenty of talented developers and should in principle be able to match commercial performance, eventually...
Are there language features that give a performance advantage to JavaScript, or disadvantage Python and Ruby?
Yes, I'm aware of: PyPy and the new JIT in Ruby/mruby 2.6 or 2.7. Neither seems to approach V8 or Chakra performance. PyPy is impressive though (but it only supports older versions of Python, not 3.7 or 3.8). I'm also aware of Unladen Swallow, an aborted effort toward an LLVM-based JIT for Python. That was a couple of young developers, which is partly why I asked the $75 million budget question. And JRuby.
Thanks.