I've been following Bug Labs' choice of JVM quite closely. After a series of comparisons between JamVM, CacaoVM and PhoneME they adopted PhoneME (initial test here and the follow-up). I blogged on the results of the first test, which were favourable to JamVM. However, for the second test they sorted out the problems with running PhoneME's JIT, and the positions of JamVM and PhoneME were reversed.
This was disheartening, but the results spoke for themselves. However, one odd fact is that the second test did not give any details of start-up time. JamVM clearly won this in the first test, and it's unlikely enabling PhoneME's JIT would have changed this.
So, I read with great interest the recent blog entry where they've got CacaoVM/GNU Classpath running on the BUG. It appears they will still ship with PhoneME, but CacaoVM/GNU Classpath will be an option for customers who require the Classpath exception.
So what? Well, I'd like an explanation of why they seem so reluctant to use JamVM. From their own tests, JamVM came out on top for start-up, and second in performance to PhoneME with its JIT.
Perhaps they've finally cracked the performance problems with CacaoVM. But JamVM is not configured for top performance on ARM either (by default, the inlining interpreter is disabled on ARM).
Of course, there are many other advantages to JamVM on embedded systems besides start-up time. It has its own compacting garbage collector, with full support for soft, weak and phantom references, in addition to class unloading. CacaoVM relies on the Boehm GC, which suffers from memory fragmentation, and it has no support for soft/weak/phantom references or class unloading.
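For readers unfamiliar with the reference types mentioned above, here is a minimal sketch of Java's soft, weak and phantom references. The java.lang.ref API is standard, but the class and variable names are just for illustration, and real collection timing depends on the garbage collector, so the deterministic clear() call stands in for what the collector would do:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class RefDemo {
    public static void main(String[] args) {
        Object obj = new Object();
        ReferenceQueue<Object> queue = new ReferenceQueue<>();

        // Soft: cleared only under memory pressure; useful for caches.
        SoftReference<Object> soft = new SoftReference<>(obj);
        // Weak: cleared as soon as no strong references remain.
        WeakReference<Object> weak = new WeakReference<>(obj);
        // Phantom: get() always returns null; used with a queue for
        // post-mortem cleanup after the object becomes unreachable.
        PhantomReference<Object> phantom = new PhantomReference<>(obj, queue);

        System.out.println(soft.get() == obj);     // reachable via soft ref
        System.out.println(weak.get() == obj);     // reachable via weak ref
        System.out.println(phantom.get() == null); // always null, by design

        // clear() deterministically simulates what the collector would do
        // to a weak reference once obj is no longer strongly reachable.
        weak.clear();
        System.out.println(weak.get() == null);
    }
}
```

A VM without this support can't run code that relies on these semantics, e.g. caches built on SoftReference or cleanup via ReferenceQueue.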
Things like this make me very disheartened. As I've said before, it makes me wonder why I continue to work on JamVM at all. However, giving up would be a case of "cutting my nose off to spite my face".
If they've hit any problems with JamVM I'll be quite happy to work with them to fix them, but I've received no feedback or requests. Unfortunately, I have been unable to leave any comments on the blog entry itself. On enquiring with the webmaster, it appears that this is new software which is at an early stage. However, they've put this functionality at the top of their TODO list, and I can expect it in a day or two (thanks Brian).
To finish on a positive note, I've done quite a lot of work on JamVM over the last few months, including memory footprint and performance improvements over JamVM 1.5.1. Hopefully I'll make a new release before Christmas.
Tuesday, 9 December 2008
Monday, 17 November 2008
JamVM/GNU Classpath/iPhone roundup
It's a year since JamVM was ported to the iPhone/iPod Touch. A quick browse on Google shows up a couple of interesting things:
- Running Knopflerfish OSGi
- Running JBoss (minimal configuration)
Sunday, 16 November 2008
Lend me an ear while I call you a fool*
As the developer and maintainer of JamVM I get a regular stream of emails about licensing issues (two so far this week). But this one left me speechless:
What is your intent for users of the JamVM code? Is it just the core of the VM that you have licensed using GPLv2, and so any changes to that core code or code linked with it must be provided as opensource? Since the class libraries come from Gnu Classpath, they are covered under the so-called 'classpath exception', and don't infect code that link with it, correct? Thus, is it allowed for a company to make a product using an unmodified JamVM as a standalone program that executes proprietary and unpublished Java code, without running afoul of GPLv2?

While the question is clear, the use of the pejorative terms "infect" and "afoul" towards GPLed code immediately gets my back up. My instinct is simply to ignore it, but is there any more appropriate response?
* With apologies to Jethro Tull.
Thursday, 31 July 2008
Embedded JVM comparison
Bug Labs have done a comparison of open-source JVMs on their embedded ARM platform (the BUG, based on an ARM1136JF-S core). The VMs tested were PhoneME Advanced, Cacao and JamVM. The results are very interesting:
http://bugblogger.com/java-vms-compared-160/
JamVM comes out the fastest, followed by PhoneME and then Cacao. On startup time, JamVM also comes out top (3 ms), followed by Cacao (12 ms) and PhoneME (16 ms).
The caveat is that PhoneME's JIT is not being used because of kernel issues (and presumably, its startup time would increase even further). The real mystery, however, is the poor performance of Cacao. A good result for JamVM is meaningless if the test isn't fair.
The benchmarks used in the test are dominated by floating point. The Technical Reference Manual for the ARM core shows that it has a Vector Floating-Point (VFP) coprocessor. As long as the toolchain is correctly set up, this should be supported by JamVM. The question is whether Cacao's JIT correctly produces floating-point instructions, or always falls back to emulation.
Another possibility is cache behaviour: the performance improvement of JIT code may be offset by increased I-cache misses (an interpreter should fit entirely within the cache). JamVM's inlining interpreter is disabled on ARM, the direct-threaded interpreter being used by default. This is because inlining/super-instructions showed no performance improvement on ARM (at least on an ARM920T), despite a 200-300% improvement on AMD64. Cache behaviour was my tentative conclusion, but I didn't have time to investigate further. I'm still hoping that the recent changes to the inlining interpreter will show gains on ARM.
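To make the dispatch overhead concrete, here is a toy switch-dispatched stack-machine loop, sketched in Java for readability (the opcodes and instruction set are invented for illustration; JamVM itself is written in C, and its direct-threaded and inlining interpreters use gcc's computed goto, which has no Java equivalent). The switch at the top of the loop is the per-instruction dispatch cost that threading and inlining aim to reduce:

```java
public class ToyInterp {
    // Invented opcodes for a tiny stack machine.
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static int run(int[] code) {
        int[] stack = new int[16];
        int sp = 0, pc = 0;
        while (true) {
            switch (code[pc++]) {          // one dispatch per instruction --
                case PUSH:                 // the overhead threading removes
                    stack[sp++] = code[pc++];
                    break;
                case ADD: {
                    int b = stack[--sp];
                    stack[sp - 1] += b;
                    break;
                }
                case MUL: {
                    int b = stack[--sp];
                    stack[sp - 1] *= b;
                    break;
                }
                case HALT:
                    return stack[sp - 1];
            }
        }
    }

    public static void main(String[] args) {
        // Compute (2 + 3) * 4.
        int[] code = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(code));
    }
}
```

An inlining interpreter instead copies the handler bodies for a block back-to-back, so control falls through from one to the next without a dispatch in between; the cache question above is whether that copied code stays resident in the I-cache as well as a compact loop like this one does.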
Wednesday, 25 June 2008
Glastonbury!
Finally all ready for Glastonbury. Leave tomorrow morning, and should arrive around midday.
Finishing the welding took longer than expected (it always does). Two new shock absorbers and a couple of brake pipes and the camper passed the MOT on Friday.
Today, I replaced the oil strainer (old VW aircooled engines don't have a modern oil filter) and changed the oil. Decided to change from monograde 30 weight to a modern 15W-50 multigrade. Opinions differ as to which is best (VW recommended monograde back in the early 70s but multigrades have improved considerably since then). I've used monograde for 2 years and I'll see if it runs better.
Glastonbury should be fun, but I'm keen to get back to JamVM. I'm not taking a laptop!
Thursday, 12 June 2008
Flowers in your hair (or at least on your camper)
It's now a month since I finished working. I still have no regrets about leaving -- this is the first real time off I've had in 4 and a half years, and I've only made a small dent in the work that's built up over the last few years. In total, I've been working away from home for the past 7 years, and I'm getting to the age where I can't do it any longer.
Having said that, I will soon have to think about finding another job. I don't want to live off my savings for more than a few months -- maybe to the end of the summer. But I need to start considering my options (which at present are not many).
So what have I been doing for the last month? I've been restoring my old 1972 VW camper (it's almost as old as I am). I did quite a lot of welding for the MOT last year, but the final finishing up was rushed due to lack of time. I've got yet more to do for the MOT this year, which is due just before Glastonbury (a long-running music festival, for the non-UK readers). Taking an old hippy-wagon to Glastonbury is a lot of fun (I'm thinking about putting on a load of stick-on flowers this year).
Last week I finished respraying the front in its original colours (orient blue over pastel white) and replaced the spare tyre with a VW symbol. This took longer than expected because the front panel needed welding, and a new panel had to be welded into the left corner (it was all filler).
Tomorrow, I've got to finish removing the near-side inner sill, weld in a new one, and replace the rear jacking point. Then I've got to start on the outer sill. It should then be ready for the MOT (only 8 days remaining). I replaced the front jacking points and outriggers last year.
So what about JamVM? I'm still working on it in the evenings, as if I was still working. I'm currently still bogged down in a load of inlining interpreter optimisations that I've been prototyping for the last few months. I've now got to put everything back together and tidy things up for a release. With testing, this is still at least a month or so away.
Contrary to my previous posts, I'm no longer thinking about giving JamVM up. I've decided I do get sufficient "return" for my time to make it worthwhile. Giving your time away for free when there's no money coming in is difficult, but I don't want to end up as just another odd-jobber doing up his camper.
Monday, 28 April 2008
Third time lucky?
In JamVM 1.5.0 I released the "inlining interpreter" which copies code blocks together in a similar way to a simple JIT (but the code is compiled by gcc, rather than being generated natively as in a JIT). This achieved an impressive speed improvement and I've been keen to optimise it further.
The major thing in my sights has been the remaining dispatches between adjacent basic blocks: for instance, the fall-through edge of an "if", and the edge created by a jump target.
Currently, the unit of inlining is a basic block, because it has only one entry point and one exit point. Blocks containing instructions which need to be rewritten (quickening, e.g. after symbolic resolution) must first be executed using threaded dispatch. If we inline across blocks, we end up with blocks with multiple entry and multiple exit points. Depending on control flow, we may then reach the end of a block without all of it having been executed (entry part-way through), or we may never reach the end because a side exit is always taken. In the first case, inlining can't be done; in the second, inlining will never occur.
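The quickening mentioned above can be sketched as follows, again in illustrative Java rather than JamVM's actual C (the opcode names and field table are invented): a slow instruction resolves its operand the first time it executes, then rewrites itself in place to a quick form so later executions skip resolution. Until that rewrite has happened, the block containing it must run under threaded dispatch rather than being inlined as a fixed unit of code:

```java
import java.util.Map;

public class Quicken {
    // Invented opcodes: a symbolic load and its quickened form.
    static final int GETSTATIC = 0, GETSTATIC_QUICK = 1, HALT = 2;

    // Stand-in for the constant pool / field resolution machinery.
    static final Map<Integer, Integer> fieldTable = Map.of(7, 42);

    static int run(int[] code) {
        int pc = 0, value = 0;
        while (true) {
            switch (code[pc]) {
                case GETSTATIC:
                    // Slow path: resolve symbolically, then rewrite the
                    // instruction in place so later passes take the quick path.
                    int resolved = fieldTable.get(code[pc + 1]);
                    code[pc + 1] = resolved;       // operand becomes direct value
                    code[pc] = GETSTATIC_QUICK;    // opcode is quickened
                    break;                         // re-dispatch at the same pc
                case GETSTATIC_QUICK:
                    value = code[pc + 1];
                    pc += 2;
                    break;
                case HALT:
                    return value;
            }
        }
    }

    public static void main(String[] args) {
        int[] code = { GETSTATIC, 7, HALT };
        System.out.println(run(code));
        System.out.println(code[0] == GETSTATIC_QUICK); // rewritten in place
    }
}
```

Because the instruction stream mutates under execution, a copy of the block made before quickening would freeze the slow form, which is why such blocks are executed under threaded dispatch first.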
My first approach to solve this was simplistic, and was more of an experiment to "test the water". So I wasn't too surprised when it didn't yield any speed improvement.
My second attempt was much more complex (after inlining, check the edges and create a longer block where the blocks on both sides of an edge are inlined, fixing up internal targets). But this showed no significant general speed improvement (of the order of 2%), although specific microbenchmarks showed over 100%.
So for the last week I've been trying to explain the results. After some experiments I've come to the conclusion that it is due to instruction cache locality. Basically, the merged block (which may be merged many times) ends up in a different location to the non-mergeable blocks which remain in the initial position. Previously, inlining exhibited good cache locality due to blocks being allocated in execution order. This was destroyed by block merging. The effects of this counteracted the speed improvements leading to no change overall.
This was the position I was in on Friday (which added to my despondency). However, on the weekend I rethought the problem and came up with a third approach. I've partially implemented it and hopefully should be able to test it in a few days. Fingers crossed!
JamVM : back on the map
I feel like a kid who's thrown a tantrum and been rewarded with an ice-cream. In my last post I really thought I was asking a "serious and legitimate question" but it's difficult not to squirm when you get the praise you were secretly hoping for...
So I'm grateful to all those who replied. JamVM is firmly back on the map :)
Friday, 25 April 2008
JamVM : road to nowhere?
Change logs and development notes never give any insight into the wider whys and wherefores of a project. Perhaps that's for the better; stick to the facts, that's what engineers are good at. But as this is my first real post on JamVM (now that I know everything is working) I think it's appropriate.
I started JamVM because I stopped being paid to work on proprietary VMs (after leaving a suitable gap). Because of worries of tainting I started my own VM rather than helping out on another. For the same reason I never directly contributed to GNU Classpath either. Of course, I wanted to make it smaller than any other, and I also wanted to make it open-source.
I work on JamVM because I enjoy it. It's also nice to get the (occasional) positive email from users, and to see people using it on a whole variety of hardware. The download statistics are also still going up (last month, downloads from SourceForge were over 1000 for the first time, and that doesn't include all the distros that package it, or embedded buildroots).
The problem is I sometimes wonder whether I'm flogging a dead horse and I'd be better off contributing my time to something else. I'm not trying to throw my toys out of the pram either; this is a serious and legitimate question.
Of course, the reason is OpenJDK and to a lesser extent PhoneME. Java is now open-sourced, mostly unencumbered and finally packaged. At best am I wasting my time, and at worst am I fragmenting and confusing things? Opinions, dear readers, gratefully received.
I discussed this with Jeroen Frijters at FOSDEM. In danger of misrepresenting things due to alcohol abuse, the general upshot was "why care?". If you still enjoy doing it, then do it. So I am.
First Post!
With the orbits of the Java planets colliding I've decided it's about time that JamVM got a blog! It's only taken 5 years :)