The World Doesn’t Wait

The following charts illustrate the evolution of the carbon dioxide concentration in the atmosphere and of the temperature across thousands of years. The data come from verifiable scientific sources, so you can check for yourself if you want.

This first graph shows a combined record of carbon dioxide and temperature, illustrating the strong correlation between the evolution of these two parameters (click on the image to zoom in).

Carbon Dioxide and Temperature Record

The little red arrow in the upper right corner is a bit puzzling. So let’s take a closer look at the CO2 level by itself:


In short, the maximum amplitude ever observed has doubled in less than 200 years. The difference between the top and the bottom of those peaks translates into several meters of ice on top of a lot of people’s heads. I’ll let you imagine what that means when the temperature rises instead of falling (the temperature has barely started catching up with the CO2 level). But of course it has nothing to do with human activity.

The U.S.A. is responsible for 24% of the total quantity of CO2 emitted (2002 data) with only 4.6% of the world population. By comparison, the European Union emits 15% of the total CO2 for 7.5% of the world population (and is working on it), and Japan 5% for 1.94% of the world population (and is working on it).

There are three possible ways to react to this:

  • Deny: who knows, the world might be flat after all.
  • Despair: screw everybody, including our children; we all die someday anyway, so drink more whisky in the meantime.
  • Do something about it.

If you like the third option better, it’s not that complicated: just do your homework and vote for someone with the political will to do something. Choose anybody, I don’t care, but choose well (maybe avoid those whose campaign is heavily financed by major oil companies, though).

The Future Won’t Be Statically Typed

We sometimes have “crystal ball discussions” at the office, trying to see what the future of Java, Ruby, computer languages, software and the world in general is going to be. The easy part is the world, of course: with no fish left in 40 years, it’s easy to guess how all that is going to end up. For computer languages, however, it’s a bit trickier.

I’m more and more convinced that statically typed languages will come to an end (just like the fish, but quicker), replaced by duck-typing-based languages. There have been many criticisms of duck typing, but those voicing them usually don’t have extensive experience developing in non-statically-typed languages, or are Java fundamentalists. I haven’t heard anybody developing significantly in Ruby complain about it not being statically typed. So I’ll spare you the whole argument about safety and compile-time checking, because in the end it doesn’t really matter. You can theorize all you want, but the plain fact is that static typing is a pain, and pain slows us down and doesn’t help us get things done.
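To make that concrete, here’s a minimal Ruby sketch (the class and method names are mine, purely illustrative): any object that responds to the right method works, with no interface declaration or cast in sight.

```ruby
# Duck typing: consume anything that responds to #read,
# no interface declaration or type annotation required.
class StringSource
  def initialize(text)
    @text = text
  end

  def read
    @text
  end
end

class CannedSource
  def read
    "canned data"
  end
end

def shout(source)
  # Works for any "duck" with a #read method.
  source.read.upcase
end

puts shout(StringSource.new("hello"))  # prints HELLO
puts shout(CannedSource.new)           # prints CANNED DATA
```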

So, very practically, I can think of only one reason that could still make the duck typing alternative less appealing to some: the IDE. But IDEs will eventually catch up, for example by using instrumented interpreters to detect types dynamically. There are solutions, and even if they don’t cover all situations, 95% is good enough.

Say goodbye to your type definitions, generics and typed annotations (even annotations are statically typed, for Christ’s sake!), because I’m pretty sure you won’t see much of them 5 years from now.

Builds And Transitive Dependencies

I’ve come to think that transitive dependencies are mostly evil. Really. They can be useful sometimes, if you keep them on a tight leash. But chances are you’re going to shoot yourself in the foot sooner or later using them. First let me illustrate with several use cases and examples that I’ve experienced first hand.

First example: you have a simple web project and use several libraries. And 3 or 4 of them happen to depend on Xerces, as almost the whole world depends on Xerces for some reason. Now you build a WAR file, and your build system is nice enough to put all your dependencies, including transitive ones, in its WEB-INF/lib directory. Nice enough? Think again. You’re going to end up with 3 or 4 different versions of Xerces in there. Which one is going to be selected at runtime? Well, do you want to bet on whether your app server follows alphabetical order or not?
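You can spot the problem in your own WAR with a few lines of Ruby (a sketch of mine; the helper name is made up and the version-stripping regex is naive):

```ruby
# List libraries that appear in several versions under a lib directory,
# e.g. xerces-2.4.0.jar and xerces-2.6.2.jar pulled in transitively.
def duplicated_jars(lib_dir)
  jars = Dir[File.join(lib_dir, "*.jar")].map { |path| File.basename(path) }
  # Naive split: everything before the trailing "-<version>.jar" is the name.
  grouped = jars.group_by { |jar| jar.sub(/-[\d][\w.]*\.jar\z/, "") }
  grouped.select { |_, versions| versions.size > 1 }
end
```

Run it against WEB-INF/lib after a build: any non-empty result means you’re betting on your app server’s classloader ordering.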

Second example: you’re using XDoclet. Or the Spring Framework. Or any other framework made of several modules, some necessary, some optional. All those modules sort of depend on each other, so chances are that by pulling one, you’re going to pull everything. And usually people don’t think much about how others are going to download their project outside of the regular distribution; they just include everything they can think of as dependencies. You end up with a lot of garbage that you will never use, and beat the record for the biggest software distribution ever built.

Another example: you depend on project A. Project A depends on B. And B depends on C. So far so good. Now it turns out that the repositories containing all these dependencies are mostly a mess, so somebody comes along and removes the version of C that B depends on. Your build is broken.

A last one. This one would actually be pretty funny if it weren’t so pathetic. On Apache Ode we have a JBI wrapper to allow deployment in a JBI container, and we use a Maven plugin from ServiceMix to build Service Assemblies. Now this plugin happens to depend on the whole ServiceMix engine, because it also includes tasks to auto-deploy and run the server directly. So we end up pulling the whole ServiceMix project just for a plugin. Here is the best part: ServiceMix uses Ode. It’s part of its dependencies. So when we build our stuff for the first time, we end up downloading all of ServiceMix plus all of OUR stuff that we’re currently building. How crazy is that?


Given all this mess, what’s a build system to do? I think the transitive dependency problem has no solution; there are techniques that can be used to keep some control, but deep down it’s really flawed. The dependencies that are right for you can’t be guessed, just like the code you’re writing can’t all be generated.

However, I think we’re still going to add it to Raven. Yep, you heard me right. And there are two reasons why:

  • It can save you a lot of time in the early stages of a project or for prototyping. It’s really nice to get a project set up and running quickly.
  • I already know it’s going to be the most requested feature. Implementing every single stupid feature that people ask for is a bad idea, but this one is only partially stupid and can also make sense (see above).

So to give you weapons against chaos, pain and despair (exaggeration is my friend), I’d like to keep transitivity under control and allow you to opt out at any time. Transitivity would be toggleable: you’d be able to turn it off and then specify everything explicitly. When you choose to do so, as we’re all a bunch of lazy asses, Raven would let you generate a dependency declaration with all the transitive dependencies you need. You’d then clean it up a bit to fit your needs, adding rationality to an insane accumulation of libraries, and when it’s all pretty, include it in your build.

So, any other strong opinions on transitive dependencies? Ideas?

Pictures by Kevin Day and Dadooron.

Coming Next in Raven

So I haven’t been writing for a while, and the main reason is that I’ve been pretty busy. There’s been some book writing (getting closer to being published, I’ve finished most of the content), a lot of Apache ODE development for Intalio and of course Raven. Plus the winter in San Francisco is pretty nice so far, so surfing and swimming are tempting.

Anyway, I’m not here to whine about time scarcity; life is too short and all that.

We’ve started working with Matthew on several improvements and new features for your Java builds. All of them should land when we’re ready for a 2.0 release of Raven. I’ll give you a quick overview here, and will probably go into more details (and rants) in later writeups.

The first big change you will see is that we’re now tied to JRuby. Honestly, I would have liked to keep CRuby, but there’s no real point when the goal is building Java, and this opens the door to several optimizations and nice features.

The main optimization is Matthew’s build server, which he integrated from JRake. The idea is that you start a VM only once on your machine, and then the build is always executed in it. The client just tells the server which task to execute and where, and voila! The downside is that you have to keep the server running, but it’s fairly small; the upside is that you have absolutely no startup cost when building. And as most tasks directly use Java classes (javac, javadoc, jar, junit and all the jays), execution is also much faster than shelling out like Raven used to do.
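The client/server split is easy to picture with Ruby’s standard DRb library (my own illustration of the idea, not Raven’s actual protocol):

```ruby
require 'drb/drb'

# A long-lived "build server" keeps the VM warm; thin clients just send
# it task names over the wire.
class BuildServer
  def run(task_name)
    # A real server would dispatch to the matching build task here.
    "executed #{task_name}"
  end
end

# Port 0 lets DRb pick any free port; DRb.uri reports the actual address.
DRb.start_service('druby://localhost:0', BuildServer.new)

client = DRbObject.new_with_uri(DRb.uri)
result = client.run('compile')  # round-trips through the server
puts result
DRb.stop_service
```

The real win comes when the server process outlives many client invocations, so the VM startup cost is paid only once.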

Another big modification is the use of “channels”. I think that one deserves its own blog post; I don’t want to summarize too much and confuse you. But in short, this simplifies the interactions between your tasks: they will easily be able to reuse the results produced by other tasks (code generation, jar production and all the usual suspects) without you needing to care about it. And it makes things easy when you really do need to care about it.

Yet another goodie brought from JRake by Matthew is multithreaded builds. Your tasks run in parallel whenever possible. That saves you time too, especially in this new multicore era, with your processors multiplying quicker than a spermatozoon meeting an ovum.
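The effect is easy to sketch with plain Ruby threads (Raven will rely on JRake’s scheduler; this just shows the principle): independent tasks overlap, so total wall time approaches the longest single task instead of the sum of all of them.

```ruby
# Three independent "tasks"; each sleeps to simulate real work.
tasks = {
  'compile' => -> { sleep 0.2; 'classes' },
  'javadoc' => -> { sleep 0.2; 'docs' },
  'jar'     => -> { sleep 0.2; 'archive' },
}

started = Time.now
results = {}
mutex = Mutex.new

tasks.map do |name, work|
  Thread.new do
    value = work.call
    # Protect the shared results hash from concurrent writes.
    mutex.synchronize { results[name] = value }
  end
end.each(&:join)

elapsed = Time.now - started
puts results.inspect
puts format('%.2fs elapsed', elapsed)  # close to 0.2s, not 0.6s
```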

There will also be Ant integration, simply because rebuilding all the nice tasks that have been built for Ant would take us too much time. I’m not talking about mkdir or ssh here; Ruby offers better alternatives. But for XDoclet, XMLBeans or support for some VCSs, it’s a must-have. We started with AntBuilder, but we may very well end up with Antwrap, as its maintainer, Caleb Powell, is a pretty active guy while AntBuilder is dormant.

And that’s not all. Better logging. Much, much better logging. Better JUnit support. Maybe some Continuous Integration integration. And maybe more.

Stay tuned…

Pic from Deborah Lattimore.

JRake and Raven Are Now One

We’ve been discussing and sharing thoughts for some time now with Matthew Foemmel on various problems, all related to building Java code using Rake. And it’s been quite an interesting discussion. As we realized that our visions and interests had a lot in common, we decided that our respective projects, Raven and JRake, could be merged into one. Working together, we’ll probably be able to build a much better tool. Besides, it’s always more fun to collaborate, share code and ideas.

For pragmatic reasons, meaning that it’s always a pain in the ass to find a good name, development will happen within Raven. Not that Raven is such a good name (it’s actually far too close to Maven; people seem to think it’s a Maven clone even though it’s a totally different beast), but at least it’s there. So we’ve already started doing some stuff for a future 2.0 branch. Pretty exciting stuff.

I’ll blog more about what we’ll be working on, the improvements and new features that are coming. I believe Matthew is going to do the same on his blog so you can already update your RSS reader. Building Java code is definitely going to become easier and easier…

Picture by superlocal

Java Gems Are Out There

I’ve finally finished building Raven‘s Java Gem repository. It’s right there:

In case you were wondering, there are exactly 11,256 gems. I think RubyForge has about 5,000 of them, so that’s HUGE! But there’s also a lot of junk; the Maven 2 repository isn’t what I would call clean, and there’s some redundancy, mostly due to package naming changes. So if you spot duplicates or bad gems, don’t hesitate to report them using Raven’s bug tracker.

I expect the repository to be useful beyond Raven; the availability of an index of almost all available Java artifacts is, IMO, invaluable for many uses. Just for the kick of it, you can try:

gem install --source xstream-xstream

I’ve also published a new release of Raven, already 1.2.2! There’s no big change yet, but it uses the new central Gem repository as the default for auto-downloads. So instead of using gem directly, just create your project rakefile like the following:

dependency 'compile_deps' do |t|
  t.deps << ['xstream']
end
javac 'compile' => 'compile_deps'

The download happens automatically, picking the latest version. But this is a bad habit, only useful for prototyping; I would always recommend using explicit versions.
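Pinning a version uses the same declaration, with a hash instead of a bare name (the version number here is just an example):

```ruby
dependency 'compile_deps' do |t|
  t.deps << [{'xstream' => '1.2.2'}]  # explicit version instead of "latest"
end
javac 'compile' => 'compile_deps'
```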

Java Gems Repository

Thanks to a very nice donation from Tom Ayerst, there’s going to be a central repository of Java Gems. I’m currently running a small script that gets all the jars from Maven repositories and wraps them in Gems. They’ll be published on a public website. So very, very soon you will be able to get jars using RubyGems! And using Raven is going to be even easier!

You won’t have to build your own Gem repository anymore; one will be provided. It will just be a matter of writing:

dependency 'compile_deps' do |t|
  t.deps << [{'commons-logging' => '1.0.4'}, 'commons-pool']
end
javac 'compile' => 'compile_deps'

And voila! The missing libraries will be installed in your local gem repository automatically from the central one, and used from there by Raven. Just one more thing before I let you go: yes, Raven has a dependency management system similar to Maven’s, but NO, Raven isn’t like Maven. It’s a totally different beast, and if you’re not convinced, just give it a try.

Thanks again Tom!

Picture by alphadesigner.

Internet Will Go Down (And We’re All Gonna Die)

I just came across a post from Nicholas G. Carr giving some credit to the current idea that some day the whole Net will go down because of the overload brought by the multiplication of video streaming services. This is a pretty common idea these days that I’ve seen mirrored here and there. To Nicholas’ credit, he ends his post by softening the Nostradamus-style prediction a bit with:

“The video boom, and the Venice Project in particular, may not bring the Net down, but it will likely reveal whether the current “abundance” of bandwidth is a lasting phenomenon or just a transitory one.”

The truth is, the bandwidth scarcity is an impression caused by a local limitation in the U.S. The Internet infrastructure is aging here. My DSL line gives me 1.5Mbps, which is supposed to be fast. Before moving from France, I used to have 10 to 12 times more, which is pretty common in Europe; I’d been above 5Mbps for at least 3 years. Several ISPs over there provide video on demand and several TV channels over DSL, which doesn’t mean a few videos here and there but a permanent video stream. Granted, it’s only between your ISP and your home and not at the scale of the whole Net, but it’s proof that the technology exists and is already mainstream.

Heck, Japan has had over 100Mbps for more than a year now, over optical fiber. They have a definite advantage: the country is small and very urban, with high population density. Investing in wired infrastructure there is definitely worthwhile.

So if there’s a bandwidth shortage to come, it’s not going to be global but most probably local to North America. My hunch is that since the network here is the oldest, it’s also the first to hit the wall. And the country is pretty large, which makes renewing the lines very expensive. Nobody wants to finance the infrastructure renewal when it’s far easier to keep exploiting the existing network. Paradoxically, the country where the Internet business is the most developed is also far behind in terms of bandwidth availability.

Eventually, the Internet giants will find this limitation annoying. The Net users and the Net providers could very quickly find common interests.

Linux Is Beautiful, Especially With Windows Software

This is a post where I’m going to say a lot of good things about a commercial product. However, I don’t work for the company behind it, nor does my 16-year-old brother, my mother or my wife. I’m not paid for writing whatever I’m going to write; I’m a free man (well, a married free man). Plus, given the popularity of my blog, this post wouldn’t be worth more than 50 cents, which is well under my minimum donation threshold of $20 or 17 euros (payable by wire transfer).

I’ve been a Linux user for quite a while now, and have always been annoyed by people sending me videos or documents that are Windows-specific. You know, those very interesting PPS files. For most of these there’s an alternative, like OpenOffice for the aforementioned document types. But it’s not perfect; the document will never look quite the same, which can be really annoying. I also have a personal weakness for which I haven’t been able to find a workaround so far: M$ Money.

But all these problems are now history; once again the penguin wins! I’ve rediscovered CodeWeavers CrossOver and it’s really great. It wraps Wine with all kinds of goodies so that you don’t spend a whole week getting something that starts but crashes every 2 minutes. To install the Office suite, for example, just give it your installation CD or the setup executable. It then runs the Office install. Exactly the same one. And the Windows reboot simulation takes only 5 seconds, if you see what I mean.

I’ve also found a solution to my nasty M$ Money addiction by switching to Quicken; it’s pretty similar and also runs on CrossOver. Life is so beautiful sometimes, I’m almost crying. I can even run Internet Explorer with Beryl transparency! But I don’t use it too often; who knows, a worm could sneak in.

Now I can say that I’m really a free man (well, free from W$, as I’m still married): I can run everything from my beloved Kubuntu. No more dual boot. No more old laptop running Windows. Happily ever after.

By the way, to finish this commercial review: I didn’t break the bank. CrossOver is only $40, which is about two minimum donations, or one generous one.

Update: I’ve just found out that CrossOver also supports iTunes. It’s an older version (4.9), but that doesn’t matter much. The point is I can now stream to my AirPort Express with Linux! No more buggy and unsupported raop_play for me.

Securing On The Small

I still keep locking and unlocking my car. It’s easy: I have a small remote for it, just like almost everybody nowadays. But what for? I got thinking about it today. I don’t have anything of value in it; my radio is probably worth $20, and the only other significant valuable is a pack of chewing gum. So why do I lock my car? Hell, because that’s what everybody does. It would be pretty stupid to get a window broken because some stupid guy was short of chewing gum, though.

I’ve also been thinking about convertibles. These cars are almost always left open, and it’s not really a problem. So why do I still lock my car? Well, as I said, the functionality is there and it’s pretty harmless. Harmless? I’m not really a specialist in car parts, but a security system closing four doors plus the trunk, locking and unlocking with both a small remote and a pretty complex key, must not be cheap. I would say a grand, maybe two.

What do I care most about here, the car itself or the pack of chewing gum? Wouldn’t that couple of thousand dollars be better spent securing the car’s ignition instead of the whole cockpit? Security is all about money: if there’s a quick and cheap way to steal something of value, then it’s going to be a very popular target. You just have to raise the cost of stealing high enough that it’s not worth it anymore. Another thing to consider is what you want to secure. It’s fairly easy to make a small ignition system attack-proof; the car cockpit is another story, and the obvious weak points are the windows. Roughly, the bigger, the harder.

So I think I would be much better off with a car that doesn’t close.

Securing software is pretty much the same thing. I’ve worked for companies spending large amounts of money to implement senseless security procedures: forcing the use of different protocols on both sides of a DMZ, or not opening firewalls for perfectly valid communications, pushing the generalization of SOAP over HTTP (which brings its own pains). Securing on the large instead of securing on the small.

In security maybe more than anywhere else, one size fits all doesn’t work. Secure only where it makes sense and do it on the small.

Picture by ac_jalali