Aiming for Mainstream

Over on defmacro today, a new article appeared, “Why Exotic Languages Are Not Mainstream”, in which the author laments that while there appear to be various choices for using Haskell on Windows, it turns out that all of them are, in some way, not ready for prime time… or even for effective hobbyist use.

I’ve noticed this myself in my last few forays into esoteric languages: the illusion of plenty of choices runs into the reality of no good choices. This is not a universal problem; I’ve had great results with Ruby, Python, and Lua, all of which are to some extent esoteric. The thing those languages have in common is that each has at least one (and generally, only one) robust, production-grade implementation with a community actively supporting it.

If you want to see your favorite language gain acceptance, spend your time creating / maintaining / vigorously supporting a production-ready implementation.

Assembly code from 1994, proto-DSL

Tonight I came across a chunk of x86 assembly code that I wrote for a university class in 1994. I present it here in its original form, complete with 1994 file modification date:

KSPOOL.zip

The thing I notice about this code in retrospect is that I used a macro (this was assembled with a macro assembler) to make some menu-key-dispatch code succinct and declarative in its appearance. This is quite low-level, but in a sense not all that different from the “domain specific language” idea which has grown popular recently.
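
The modern analogue in a high-level language would be a declarative dispatch table. Here is a rough sketch of the same idea in Java; the keys and actions are invented for illustration, not taken from the original assembly:

import java.util.LinkedHashMap;
import java.util.Map;

public class MenuDispatch {
    // Declarative table mapping menu keys to actions – roughly what
    // the assembler macro expanded into compare-and-jump code.
    private static final Map<Character, Runnable> MENU = new LinkedHashMap<>();
    static {
        MENU.put('p', () -> System.out.println("print spool file"));
        MENU.put('d', () -> System.out.println("delete spool file"));
        MENU.put('q', () -> System.out.println("quit"));
    }

    // The dispatch loop is the tiny "interpreter" for the table above.
    public static void dispatch(char key) {
        Runnable action = MENU.get(key);
        if (action != null) {
            action.run();
        }
    }

    public static void main(String[] args) {
        dispatch('p'); // prints "print spool file"
    }
}

The table reads as a little declarative language; the assembler macro achieved the same effect by generating the dispatch code at assembly time.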

Refactoring to Patterns? No, learn the primitives.

Last night at XPSTL, John Sextro gave a talk on the “Move Embellishment to Decorator” refactoring as described in Joshua Kerievsky’s Refactoring to Patterns book. I greatly enjoyed and benefitted from the original Design Patterns book (from the Gang of Four) which was already old (published 1994) when I heard about it and bought it in 1998. (By the way, when I looked it up on Amazon to put in the link above, Amazon reminded me that: “You purchased this item on February 18, 1998”.)

I enjoyed John’s talk, and I hope he does more of them.  He hit a few rough spots along the way (the usually excellent IntelliJ IDEA IDE failed mysteriously, for example), but worked through it and reached the target of composable decorators. The rough spots led to some interesting diversions also.
I’m not sold on the “refactoring to patterns” idea, though; it seems like a distraction from a more important goal: to gain deep experience and understanding of how to use the underlying “primitives” (encapsulation, abstraction, polymorphism, low coupling, high cohesion, etc.). Once you grasp the primitives, the design patterns are useful mainly as a tool for talking about how something works – in other words, write good code, then perhaps notice that it follows one of the “patterns”, if you find that helpful in explaining how the code works.

Several times at XPSTL, we’ve had lengthy conversations about how to choose whether to use “Strategy” or “Command” or “Decorator” or …. I’m not convinced that these conversations are helpful. My answer is that it is silly to look for a list of rules in choosing which pattern to use. Read the patterns, use them to learn good ways to use and combine the primitives. Then do that in your code:

  • Notice that you can benefit from polymorphism, and use it.
  • Notice that you can split a class into two separately cohesive parts, and do so.
  • Notice that you could get composability by replacing inheritance with aggregation, and do so (see the sketch below).

You’ll end up with the right “Pattern” – and you probably won’t care.
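
As a concrete illustration of that last bullet, here is a minimal, hypothetical sketch of composable decorators in Java – the same target John’s talk worked toward. The Renderer interface and the embellishments are invented for this example:

// Each embellishment wraps another Renderer (aggregation)
// instead of subclassing a concrete renderer (inheritance).
interface Renderer {
    String render(String text);
}

class PlainRenderer implements Renderer {
    public String render(String text) {
        return text;
    }
}

class HeaderDecorator implements Renderer {
    private final Renderer inner; // aggregation, not inheritance
    HeaderDecorator(Renderer inner) { this.inner = inner; }
    public String render(String text) {
        return "== Report ==\n" + inner.render(text);
    }
}

class FooterDecorator implements Renderer {
    private final Renderer inner;
    FooterDecorator(Renderer inner) { this.inner = inner; }
    public String render(String text) {
        return inner.render(text) + "\n-- end --";
    }
}

Because each embellishment aggregates a Renderer rather than extending a concrete class, they compose in any order – for example, new FooterDecorator(new HeaderDecorator(new PlainRenderer())). Whether you then call the result “Decorator” is an afterthought.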

Intel Integrated Performance Primitives Bewilderment

I’ve been evaluating Intel’s Integrated Performance Primitives, a set of libraries for low-level signal processing and image manipulation. These appear to be very well engineered at the lowest levels, but the packaging is a mess, as I’ll describe below. The point of these libraries is that they use the various enhanced CPU instruction sets (MMX, SSE, SSE2, SSE3, are there more?) of the various Intel processors. Such enhancements are very useful if you are processing a lot of images in a hurry… such as in a video stream.

Big gripe: nearly all of the CPU enhancements that these libraries use are also available on AMD hardware; but being Intel libraries, they don’t auto-detect and use these features on AMD CPUs. Therefore a simple, off-the-shelf benchmark will show AMD radically slower. Apparently, with the right tweaks, the libraries can be made to work well on AMD too. I can see why Intel would do this, but it has the side effect of making them look scared, and looking scared is not good marketing. I believe it would have been better for them to support AMD seamlessly, and let the benchmarks show that the Intel stuff is a bit faster… if indeed that is what they show.

Now for the bewildering deployment model: these libraries arrive as a large pile of DLLs, most of them in several flavors, one for each processor family. But rather than putting the “which DLL to load” logic in a DLL, that code is supplied only in a .lib file – so it’s quite amenable to use from C code, and much less so from non-C code.

Next bewilderment: to the “rest of us”, getting a machine with a 64-bit processor (Intel EM64T, AMD64) is no big deal; it’s just another CPU family. But to the IPP developers, it’s a separate product version, with a separate way to invoke it, and no clear example code showing how to auto-load the right libraries (any of the 32-bit, any of the 64-bit) as needed depending on CPU type. Apparently it did not occur to them that this could have been seamless.

Next bewilderment: until a few years ago, the “Intel JPEG Library” was a popular, free, and very fast JPEG compression / decompression library. It’s not free anymore; it is now part of IPP. I have no objection to that – IPP is a very good value at $200 per developer. The bewilderment is that rather than supplying the working “Intel JPEG Library” as part of IPP, they supply it only as example code. The developer then compiles the sample code to get an IJL DLL that runs on top of IPP, with all of the deployment issues mentioned above. Again, this could have been trivial, and would meet a lot of developers’ needs, but is instead a mess of accidental complexity. IPP provides a large number of low-level functions useful for writing a JPEG compressor / decompressor, but a lot of us (myself and this fellow, for example) don’t want to learn the inner workings of JPEG; it seems as though IPP could benefit greatly from some higher-level API calls.
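
To make that wish concrete, here is roughly the shape of the higher-level calls I have in mind, sketched in Java. These types and method names are hypothetical – nothing like them ships in IPP:

// Hypothetical higher-level API; invented for illustration,
// not part of IPP.
public interface SimpleJpegCodec {
    // Compress raw RGB pixels into a JPEG byte stream.
    byte[] compress(int[] rgbPixels, int width, int height, int quality);

    // Decompress a JPEG byte stream back into raw RGB pixels,
    // reporting the image dimensions via the two-element array.
    int[] decompress(byte[] jpegBytes, int[] widthAndHeightOut);
}

A pair of calls at that level, with the CPU detection and dispatch handled inside, would cover what most users of the old Intel JPEG Library actually needed.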

Why I will also never deploy with Java Web Start again

Keith Lea pointed out that he will never deploy with Java Web Start again.

With Web Start in its current form, he’ll be deploying with it again long before I will. “Never” is much too soon… here is why, echoing and expanding on Keith’s experiences. Some of these things are not Web Start’s fault per se, but they are unavoidable with Web Start.

A key piece of background: the most important feature of Web Start for us is the notion of auto-update. We periodically put an application Jar up, update the JNLP to point to it, and our users get the new version running on their PCs without a re-install process.

Java Web Start doesn’t work for a large number of users

We have the same trouble here. With a large number of users, inevitably Web Start does not work for all of them. For a few percent of users, it simply does not work. We don’t know why. Yes, we upgrade them all to the current Java, the current browser version, etc. Most of the time we get around this with a cycle or two of completely removing then reinstalling Java. In a few cases, the solution was to repave the machine entirely. Both of these are a considerable burden on our customers.

Users don’t like the experience

The part they hate most is this: the auto-update to the current application version is not optional. We upload a new version, and the next time a user runs the application (if Web Start’s update works… see below), they get the new version… even if the new version is a couple of megabytes of bloat… even if the user is on a slow, wireless link… even if the user only wanted to use the application for a minute or two.

Java/JWS detection in the browser is necessary, and unreliable

Mercifully, we have been able to avoid the problem by other means; our customers install Java on a machine right before they launch our application with Web Start.

Users don’t know what JNLP files are

… and they unwittingly misuse them

There is only one right way to use a JNLP file with Web Start, if you want Web Start’s auto-update capability to work: pass around / link to / email the URL that points to it. But in the Real World, users misuse JNLP files in many ways:

  • Email the JNLP to a group of users. The users then click on it in the email… so they get the old JNLP each time they click on it. From there, the auto-update (based on the URL inside the JNLP) might or might not work.
  • Copy the JNLP file onto the desktop. They do this because the “feature” in Web Start to create desktop icons appears to work, or not, semi-randomly.
  • Put the JNLP file on a machine, then disable web browsing on that machine… thus disabling Web Start’s auto-update… invisibly.

You cannot make the user experience much better

Most of the unpleasantness happens outside of the application’s control.

Random Caching

Web Start has a cache. The browser has a cache. Between the two of these, sometimes it is not possible to persuade Web Start to update an application to the current code (in the current JNLP file on the server). There is no visibility into where/when/how-long the old JNLP data is cached. Sometimes clearing the browser cache fixes this; sometimes it is necessary to remove and reinstall the Web Start application. This is enormously frustrating when a field user needs the current application version Right Now – the version you deployed to your Web Start web server some weeks ago.

Wireless Hostility

Wireless network connections are sometimes slow and unreliable. Web Start always downloads a whole file or fails, as far as we can tell; thus a user on a weak connection might repeatedly wait several minutes for a Web Start update, have it fail, then start again at the beginning. Of course they can’t cancel the update as noted above.

If Not Web Start, What?

Like Keith, I have concluded that a native, non-Web Start solution is the way to go. We’ve built this sort of thing for several past applications, and while it takes some coding and testing to get it right (work we had planned to avoid by using the off-the-shelf Web Start approach), it tends to work very well. Here are some key design points for auto-updating applications, with a minimal sketch after the list:

  • The user is in control. Tell the user you have a new version to offer. Perhaps start downloading it in the background. But don’t try to force them to update now.
  • Tell the user about the new version in the application, perhaps in the status bar; not only at application startup.
  • Keep the old version, and expose it to the user. The new version might not work, so if the user can click a button to run the old version instead, you have transformed an emergency into an inconvenience. (It has been suggested that we should not update our JNLP files, but rather put up a new JNLP and pass out its URL. This is totally unfeasible for us, since most users would update rarely or never.)
  • Download in the background. The user can keep getting their work done with version N while version N+1 is downloading.
  • No silent failures. If the application can’t access its update information, expose this fact clearly to users and support staff.
  • Dodge the browser cache. Do whatever you must to have as few layers of caching as possible; ideally just one layer, in the application auto-update mechanism itself. One way to avoid the browser cache is to use a “raw” HTTP-over-TCP library in the application.
  • Incremental download – if a new version download retrieves 1.9 megabytes of a 2.0 megabyte file, keep the 1.9 and start there next time.
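
Here is a minimal sketch, in Java, of the checking and downloading pieces of such an updater. The URLs, file names, and notifyUser method are all placeholders, and the resume step assumes the server honors HTTP Range requests:

import java.io.IOException;
import java.io.InputStream;
import java.io.RandomAccessFile;
import java.net.HttpURLConnection;
import java.net.URL;

public class Updater {

    static final String VERSION_URL = "http://example.com/app/version.txt";
    static final String DOWNLOAD_URL = "http://example.com/app/app-latest.jar";

    // Check for a new version; tell the user, don't force anything.
    static void checkForUpdate(String currentVersion) {
        try {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(VERSION_URL).openConnection();
            conn.setUseCaches(false); // dodge intermediate caches
            conn.setRequestProperty("Cache-Control", "no-cache");
            String latest = new String(conn.getInputStream().readAllBytes()).trim();
            if (!latest.equals(currentVersion)) {
                // Status-bar style notice, not a forced modal update.
                notifyUser("Version " + latest + " is available");
                new Thread(() -> downloadInBackground("app-" + latest + ".jar")).start();
            }
        } catch (IOException e) {
            notifyUser("Update check failed: " + e.getMessage()); // no silent failures
        }
    }

    // Resume a partial download with an HTTP Range request, so 1.9 MB
    // of a 2.0 MB file is not thrown away on a flaky wireless link.
    static void downloadInBackground(String fileName) {
        try (RandomAccessFile out = new RandomAccessFile(fileName + ".part", "rw")) {
            long have = out.length();
            HttpURLConnection conn =
                (HttpURLConnection) new URL(DOWNLOAD_URL).openConnection();
            conn.setRequestProperty("Range", "bytes=" + have + "-");
            out.seek(have);
            try (InputStream in = conn.getInputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
            // Rename into place only when complete; the old jar stays
            // on disk so the user can fall back to it.
        } catch (IOException e) {
            notifyUser("Download paused: " + e.getMessage());
        }
    }

    static void notifyUser(String message) {
        System.out.println(message); // stand-in for a real UI notification
    }
}

Even in this toy, the essential choices are visible: the user is informed rather than interrupted, failures are reported, the old version survives, and a partial download is kept for next time.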

Make a DVD with ffmpeg

For a project we have going at Oasis Digital, we have explored various libraries for creating video DVDs from computer-generated content under program/script control. There are quite a few ways to do this; one that is appealing to a command-line junkie is the combination of ffmpeg, dvdauthor, and mkisofs. It took considerable research to figure out which commands to string together for a simple scenario:

  • you have some video in AVI format (for example, an MJPEG AVI from a DV video camera)
  • you have some background music in mp3 format
  • you want a simple one-title one-chapter DVD with that video and audio

There are plenty of sites with long and complex sets of commands to accomplish these things. But for this simplest case, the essential commands are:

ffmpeg -y -i video.avi -i audio.mp3 -target ntsc-dvd -aspect 4:3 dvd.mpg

mkdir DVD

dvdauthor -x file.xml # file.xml is shown below; there is a way to avoid the file by putting a few more options here

mkisofs -dvd-video -o dvd.iso DVD
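
For reference, a minimal file.xml for this one-title, one-chapter scenario might look like the following; the dest attribute names the DVD directory created by mkdir above:

<dvdauthor dest="DVD">
  <vmgm />
  <titleset>
    <titles>
      <pgc>
        <vob file="dvd.mpg" />
      </pgc>
    </titles>
  </titleset>
</dvdauthor>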

Of course, there is considerable other work involved in wiring up a full solution, but that is more project specific. I hope these example commands shorten the research time for the next fellow who needs to do this core processing.