Here's a performance comparison of LuaJIT against other VMs on different architectures:


Measurement Methods

All measurements were taken under Linux 2.6. All Lua benchmarks shown are single-threaded, so only a single CPU core was used. The system was freshly booted and otherwise idle. All power-management features were turned off, and no hypervisor module was loaded. It was ensured that all executable code and data files were cached in memory prior to each measurement.

The C code of all VMs was compiled with GCC 4.4.3 using the default compiler flags given in the Makefiles (except for Lua on x86, where -O2 -fomit-frame-pointer was used).

The basis for the comparisons is the user CPU time as reported by the shell built-in time command (i.e. TIMEFORMAT="%U"). The accuracy of the timings is limited by the 250 Hz system timer frequency, which may cause a divergence of up to ±4 ms. Each benchmark was run three times per VM, and only the best result is reported here.
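The timing procedure described above can be sketched in bash. The `run_benchmark` function below is just a stand-in for the actual VM invocation (e.g. `luajit bench.lua > /dev/null`); everything else follows the method in the text:

```shell
# Three runs, keep the best (lowest) user CPU time. Requires bash, since
# TIMEFORMAT and the `time` keyword are bash features.
run_benchmark() { true; }   # stand-in for the real VM invocation

best=""
for i in 1 2 3; do
  # TIMEFORMAT="%U" restricts the `time` keyword's output to user CPU
  # seconds; that output goes to stderr, hence the 2>&1 inside the group.
  t=$( { TIMEFORMAT="%U"; time run_benchmark; } 2>&1 )
  if [ -z "$best" ] || awk "BEGIN { exit !($t < $best) }"; then
    best="$t"
  fi
done
echo "best user time: ${best}s"
```

Taking the best of several runs (rather than the mean) discards one-off interference from the rest of the system.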

Where possible, benchmark runs were scaled so that runtimes fall into the multi-second to minute range, which improves the overall measurement accuracy. The only exceptions are non-scalable benchmarks and cases where out-of-cache effects would dominate the execution time (e.g. array3d). The variance between identical runs is generally very low (< 0.5%) and is not shown (the whiskers would clump together in the bar graph).
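A minimal sketch of such run-time scaling, assuming bash and GNU `date`: the workload is doubled until a run lasts long enough that timer granularity (±4 ms above) becomes negligible. The busy-loop `run_benchmark` and the small 50 ms target are stand-ins; real runs invoke the VM on a scaled benchmark and target seconds to minutes:

```shell
# Stand-in for a scalable benchmark: a busy loop of n iterations.
run_benchmark() {
  local n=$1 i=0
  while [ "$i" -lt "$n" ]; do i=$((i + 1)); done
}

target_ms=50    # kept tiny for this sketch; the article targets much longer runs
iters=1000
while :; do
  start=$(date +%s%N)                                # GNU date, nanoseconds
  run_benchmark "$iters"
  elapsed_ms=$(( ($(date +%s%N) - start) / 1000000 ))
  [ "$elapsed_ms" -ge "$target_ms" ] && break
  iters=$(( iters * 2 ))                             # double until long enough
done
echo "scaled to $iters iterations (${elapsed_ms}ms)"
```

Once a suitable iteration count is found, the same fixed count is used for every VM so the reported times stay comparable.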

The startup time for running the VM executable itself is included in all measurements, but is negligible (< 100 µs). Likewise, warm-up and compile time for the JIT compilers is included. Again, this has no measurable effect, since LuaJIT's compiler warms up very quickly (LJ1: first call of a method; LJ2: 57th loop iteration) and is exceptionally fast (compile times in the microsecond to millisecond range).

About the Benchmarks

Most of the benchmarks have their origins in the Computer Language Benchmarks Game (CLBG), which compares the performance of different languages and implementations on a small set of benchmarks. Many of these benchmarks have changed over time (both spec and code), and the selection of benchmarks has varied considerably, too. Benchmark results shown for previous versions of LuaJIT or the CLBG are therefore not directly comparable. Note that the CLBG currently only shows Lua, not LuaJIT.

Most of the other benchmarks shown are Lua ports of standard benchmarks. For example, SciMark for Lua has been split up into individual benchmarks, which are run with a fixed iteration count (to obtain a runtime rather than an auto-scaled score).
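The fixed-iteration idea can be illustrated with a hypothetical stand-in kernel (the function, checksum, and iteration count below are assumptions for the sketch, not the actual SciMark-for-Lua interface): every VM runs exactly the same fixed count, and the measurement is a plain runtime rather than an auto-scaled score.

```shell
# Stand-in for one split-out kernel: sums 0..n-1 and prints a checksum,
# so a broken run is distinguishable from a fast one.
kernel() {
  local n=$1 acc=0 i=0
  while [ "$i" -lt "$n" ]; do acc=$((acc + i)); i=$((i + 1)); done
  echo "$acc"
}

ITERS=100000    # fixed count, identical for every VM being compared
start=$(date +%s%N)
result=$(kernel "$ITERS")
runtime_ms=$(( ($(date +%s%N) - start) / 1000000 ))
echo "kernel: ${runtime_ms}ms (checksum $result)"
```

Reporting a runtime for a fixed workload, instead of an auto-scaled score, keeps the numbers directly comparable across VMs and with the other benchmarks in the suite.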

The presented benchmark results are only indicative of the overall performance of each VM. They should not be construed as an exact prediction for the possible speedup of any specific application. It's advisable to benchmark your own application code before drawing any conclusions.