Just for fun, I tried comparing the performance of a variety of programming languages when computing the Fibonacci sequence using the naïve recursive technique. The code is essentially the same in all of them; here is the Java version:
Java:
public class Fib {
    public static int fib(int n) {
        if (n < 2) return 1;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(Integer.parseInt(args[0])));
    }
}
So, using this approach with input 40, I obtained the following timings:
C: 2.796s
OCaml: 2.372s
Python: 106.407s
Java: 1.336s
C# (Mono): 2.956s
The timings were taken on a dual-core Intel processor on an Ubuntu 10.04 box, using the versions of each language available in the official repositories.
I understand that functional languages like OCaml can pay a penalty for treating functions as first-class citizens, and I have no problem explaining CPython's running time since, as far as I can tell from this article, it's the only interpreted language in the test. But I was surprised by the Java running time, which is half that of the C version for the same algorithm! Would you put this down to JIT compilation?
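As a quick sanity check on the JIT hypothesis (this is my own sketch, not part of the original benchmark), one could time fib(40) several times within the same JVM: if JIT compilation is a major factor, later iterations should run noticeably faster than the first, once HotSpot has compiled the hot method.

```java
public class FibTiming {
    static int fib(int n) {
        if (n < 2) return 1;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        // Run fib(40) a few times in the same JVM. If the JIT matters,
        // the first iteration should be slower than the later ones,
        // since HotSpot compiles fib() after it becomes hot.
        for (int i = 0; i < 3; i++) {
            long start = System.nanoTime();
            int result = fib(40);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("run " + i + ": fib(40) = " + result
                    + " in " + elapsedMs + " ms");
        }
    }
}
```

Note that this only isolates warm-up effects inside one process; it does not account for JVM startup time, which the external `time`-style measurements above would also include.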
How would you interpret these findings?