“Looping” an audio file with Sox, Lame and mkfifo

Today I needed a very long (3 hour) MP3 audio file to use for an experiment; a test file with some music on it. My first thought was to start an MP3 recorder, turn on the radio, and leave for 3 hours.

But impatience is among the three great virtues of a programmer, so I turned to Google instead, seeking command line tools for audio manipulation. It turns out that sox and lame will do the job. I installed the tools – here is the Debian / Ubuntu invocation:

apt-get install sox lame

then grabbed an MP3 file of a piece of music (Peter T. Noonan, album “Cafe at Arles”, track 9 “Life’s Old Road”, if you are curious) and repeated it a few times:

sox music.mp3 foo.wav repeat 3

lame foo.wav longmusic.mp3

This worked well, the first time… but consumed a lot of disk space. To get to 3 hours I would need enough disk space for a 3 hour uncompressed WAV file. Unfortunately Sox does not support MP3 output, and I didn’t want to compress to a format it does support, then uncompress and recompress again. So I used a Unix/Linux FIFO pipe instead of a file, with Sox running in the background to fill the pipe with data for Lame:

mkfifo foo.wav

sox music.mp3 foo.wav repeat 10 &

lame foo.wav longmusic.mp3

A little while later, longmusic.mp3 is a very long MP3 file… but not long enough, because sox fails when the “virtual” WAV file it is writing reaches 2 GB in size, just as it fails with a real WAV file at that size. That got me about 1 hour and 41 minutes – not long enough; so I ended up looking elsewhere:

The Ugly Hack

It turns out that Lame will tolerate an MP3 which consists of several appended MP3 files as its input. It complains about the extra headers in the middle of such a file, but keeps processing. So this solution with cat, a pipe, and Lame worked:

cat 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 1.mp3 | lame - longmusic.mp3

A little while later, I had a 3 hour long, valid MP3 file.

Conference / User Group Member Photographs

Here is an idea I picked up at AYE and again at ETech; I mentioned it at the Ruby UG last month, and am writing it up here to encourage its use.

A common problem at growing and changing groups of people (such as user group members, conference attendees who see each other rarely, etc.) is remembering names and faces. To make it easier for everyone to remember everyone else, it’s very helpful to have a photograph of each person on the group’s web site. Sorting and labelling such photos can be a lot of work, so here is an approach to get it done with very little work.

Ingredients:

  • Digital camera
  • Blank paper
  • Wide-line markers

At a meeting, during a break or at the start/end, all of the willing attendees write their name on a blank sheet of paper, in letters at least 1 inch high. Then they hold up the paper and have a photo taken with the digital camera. A low-res, head-and-shoulders shot works best.

Afterwards, transfer the images to a page on the group’s web site. Because the photos are low-res, no scaling is needed. Because each person is holding a sheet of paper with their name, no one needs to sift through a pile of images, associating photos with names. Simply post them as-is, unsorted. This works well for at least several dozen people, and probably well enough for up to 100.

St. Louis Code Camp – I’ll be speaking, you can too

My friend Brian Button is still looking for St. Louis Code Camp speakers; if you’ve thought about giving a user group talk, this is a great way to get your feet wet.  A “code camp” is an informal event, likely with a lot of group discussion.  You don’t need a large topic or a long speech.  Simply pick a technical topic (perhaps something from your work, or something you’ve been wanting to learn on the side), prepare a few examples and notes, and sign up.

I’m already signed up to give a talk on Lua.

Refactoring to Patterns? No, learn the primitives.

Last night at XPSTL, John Sextro gave a talk on the “Move Embellishment to Decorator” refactoring as described in Joshua Kerievsky’s Refactoring to Patterns book. I greatly enjoyed and benefitted from the original Design Patterns book (from the Gang of Four) which was already old (published 1994) when I heard about it and bought it in 1998. (By the way, when I looked it up on Amazon to put in the link above, Amazon reminded me that: “You purchased this item on February 18, 1998”.)

I enjoyed John’s talk, and I hope he does more of them.  He hit a few rough spots along the way (the usually excellent IntelliJ IDEA IDE failed mysteriously, for example), but worked through it and reached the target of composable decorators. The rough spots led to some interesting diversions as well.

I’m not sold on the “refactoring to patterns” idea, though; it seems like a distraction from a more important goal: to gain deep experience and understanding of how to use the underlying “primitives” (encapsulation, abstraction, polymorphism, low coupling, high cohesion, etc.). Once you grasp the primitives, the design patterns are useful mainly as a tool for talking about how something works – in other words, write good code, then perhaps notice that it follows one of the “patterns”, if you find that helpful in explaining how the code works.

Several times at XPSTL, we’ve had lengthy conversations about how to choose whether to use “Strategy” or “Command” or “Decorator” or …. I’m not convinced that these conversations are helpful. My answer is that it is silly to look for a list of rules in choosing which pattern to use. Read the patterns, use them to learn good ways to use and combine the primitives. Then do that in your code:

  • Notice that you can benefit from polymorphism, and use it.
  • Notice that you can split a class into two separately cohesive parts, and do so.
  • Notice that you could get composability by replacing inheritance with aggregation, and do so.

You’ll end up with the right “Pattern” – and you probably won’t care.
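
To make that last bullet concrete, here is a minimal Java sketch (the Logger classes are hypothetical, not from John’s talk): the wrapper holds a reference to the interface it implements, so embellishments compose by aggregation rather than by subclassing. Whether you end up calling the result a “Decorator” is beside the point.

// Minimal sketch: composability via aggregation (hypothetical example)
interface Logger {
    void log(String message);
}

class ConsoleLogger implements Logger {
    public void log(String message) {
        System.out.println(message);
    }
}

// Wraps any Logger and adds a timestamp; wrappers can be stacked in any order.
class TimestampLogger implements Logger {
    private final Logger inner;
    TimestampLogger(Logger inner) { this.inner = inner; }
    public void log(String message) {
        inner.log(System.currentTimeMillis() + " " + message);
    }
}

// Usage: new TimestampLogger(new ConsoleLogger()).log("starting up");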

Intel Integrated Performance Primitives Bewilderment

I’ve been evaluating Intel’s Integrated Performance Primitives, a set of libraries for low-level signal processing and image manipulation. These appear to be very well engineered at the lowest levels, but the packaging is a mess, as I’ll describe below. The point of these libraries is that they use the various enhanced CPU instruction sets (MMX, SSE, SSE2, SSE3, are there more?) of the various Intel processors. Such enhancements are very useful if you are processing a lot of images in a hurry… such as in a video stream.

Big gripe: nearly all of the CPU enhancements that these libraries use are also available on AMD hardware; but being Intel libraries, they don’t auto-detect and use those features on AMD CPUs. Therefore an off-the-shelf, simple benchmark will show AMD radically slower. Apparently with the right tweaks the libraries work well on AMD too. Now I can see why Intel would do this, but it has the side effect of making them look scared. Looking scared is not good marketing. I believe it would have been better for them to support AMD seamlessly, and let the benchmarks show that the Intel stuff is a bit faster… if indeed that is what they show.

Now for the bewildering deployment model: These libraries arrive as a large pile of DLLs, with most of them in several flavors, one for each processor family.  But rather than putting the “which DLL to load” logic in a DLL, that code is supplied only in a .lib file – so it’s quite amenable to use from C code, and much less so from non-C code.

Next bewilderment: To the “rest of us”, getting a machine with a 64-bit processor (Intel EM64T, AMD64) is no big deal, it’s just another CPU family.  But to the IPP developers at Intel, it’s a separate product version, with a separate way to invoke it, and no clear example code to show how to auto-load the right libraries (any of the 32-bit, or any of the 64-bit) as needed depending on CPU type.  Apparently it did not occur to them that this could have been seamless.

Next bewilderment: Until a few years ago, the “Intel JPEG Library” was a popular, free, and very fast JPEG compression / decompression library.  It’s not free anymore; it is now part of IPP.  I have no objection to that, IPP is a very good value at $200 per developer.  The bewilderment is that rather than supplying the working “Intel JPEG Library” as part of IPP, they instead supply it only as example code.  The developer then compiles the sample code to get an IJL DLL that runs on top of IPP, with all of the deployment issues mentioned above.  Again, this could have been trivial, and would meet a lot of developers’ needs, but is instead a mess of accidental complexity.  IPP provides a large number of low-level functions useful for writing a JPEG compressor / decompressor, but a lot of us (myself and this fellow, for example) don’t want to learn the inner workings of JPEG; it seems as though IPP could benefit greatly from some higher level API calls.

Why I will also never deploy with Java Web Start again

Keith Lea pointed out that he will never deploy with Java Web Start again.

With Web Start in its current form, he’ll be deploying with it long before I will use it again.  “Never” is much too soon… here is why, echoing and expanding on Keith’s experiences. Some of these things are not Web Start’s fault per se, but they are unavoidable with Web Start.

A key piece of background: the most important point of Web Start for us is the notion of auto-update. We periodically put an application Jar up, update the JNLP to point to it, and our users get the new version running on their PC without a re-install process.

Java Web Start doesn’t work for a large number of users

We have the same trouble here. With a large number of users, Web Start inevitably fails for some of them; for a few percent of users, it simply does not work, and we don’t know why. Yes, we upgrade them all to the current Java, the current browser version, etc. Most of the time we get around this with a cycle or two of completely removing and then reinstalling Java. In a few cases, the solution was to repave the machine entirely. Both of these are a considerable burden on our customers.

Users don’t like the experience

The part they hate most is this: the auto-update to the current application version is not optional. We upload a new version, and the next time a user runs the application (if Web Start’s update works… see below), they get the new version… even if the new version is a couple of megabytes of bloat… even if the user is on a slow, wireless link… even if the user only wanted to use the application for a minute or two.

Java/JWS detection in the browser is necessary, and unreliable

Mercifully, we have been able to avoid the problem by other means; our customers install Java on a machine right before they launch our application with Web Start.

Users don’t know what JNLP files are

… and they unwittingly misuse them

There is only one right way to use a JNLP file with Web Start, if you want Web Start’s auto-update capability to work: pass around / link to / email the URL that points to it. But in the Real World, users misuse JNLP files in many ways:

  • Email the JNLP to a group of users. The users then click on it in the email… so they get the old JNLP each time they click on it. From there the auto-update (based on the URL inside the JNLP) might or might not work.
  • Copy the JNLP file on to the desktop. They do this because the “feature” in Web Start to create desktop icons appears to semi-randomly work or not work.
  • Put the JNLP file on a machine, then disable web browsing on that machine… thus disabling Web Start’s auto-update… invisibly.

You cannot make the user experience much better

Most of the unpleasantness happens outside of the application’s control.

Random Caching

Web Start has a cache. The browser has a cache. Between the two of these, sometimes it is not possible to persuade Web Start to update an application to the current code (in the current JNLP file on the server). There is no visibility into where/when/how long the old JNLP data is cached. Sometimes clearing the browser cache fixes this; sometimes it is necessary to remove and reinstall the Web Start application. This is enormously frustrating when a field user needs the current application version Right Now, which you deployed to your Web Start web server some weeks ago.

Wireless Hostility

Wireless network connections are sometimes slow and unreliable. Web Start always downloads a whole file or fails, as far as we can tell; thus a user on a weak connection might repeatedly wait several minutes for a Web Start update, have it fail, then start again at the beginning. Of course they can’t cancel the update as noted above.

If Not Web Start, What?

Like Keith, I have concluded that a native, non-Web Start solution is the way to go. We’ve built this sort of thing for several past applications, and while it takes some coding and testing to get it right (work we had planned to avoid by using the off-the-shelf Web Start approach), it tends to work very well. Here are some key design points for auto-updating applications; a rough code sketch of a few of them follows the list:

  • The user is in control. Tell the user you have a new version to offer. Perhaps start downloading it in the background. But don’t try to force them to update now.
  • Tell the user about the new version in the application, perhaps in the status bar; not only at application startup.
  • Keep the old version, and expose it to the user. The new version might not work, so if the user can click a button to run the old version instead, you have transformed an emergency into an inconvenience. (It has been suggested that we should not update our JNLP files, but rather put up a new JNLP and pass out its URL. This is totally unfeasible for us, since most users would update rarely or never.)
  • Download in the background. The user can keep getting their work done with version N while version N+1 is downloading.
  • No silent failures. If the application can’t access its update information, expose this fact clearly to users and support staff.
  • Dodge the browser cache. Do whatever you must do to have as few layers of caching as possible; ideally just one layer, in the application auto-update mechanism itself. One way to avoid the browser cache is to use a “raw” HTTP-over-TCP library in the application.
  • Incremental Download – if a new version download retrieves 1.9 megabytes of a 2.0 megabyte file, keep the 1.9 and start there next time.
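
Here is a rough Java sketch of a few of those points – a background check, cache avoidance, and a resumable download. The URLs, file names, and class name are hypothetical, and a real implementation would need more care (checking for an HTTP 206 response to the Range request, verifying the finished file, and surfacing status in the UI rather than on the console):

import java.io.*;
import java.net.*;

// Hypothetical background updater: checks a version URL with caching disabled,
// then resumes a partial download of the new Jar using an HTTP Range request.
public class UpdateChecker implements Runnable {
    private static final String VERSION_URL = "http://example.com/app/latest-version.txt"; // hypothetical
    private static final String JAR_URL = "http://example.com/app/app-latest.jar";         // hypothetical

    public void run() {
        try {
            String latest = fetchLatestVersion();
            // Tell the user (e.g. in the status bar) that a new version exists; do not force it on them.
            System.out.println("New version available: " + latest);
            resumeDownload(new File("app-" + latest + ".jar.part"));
        } catch (IOException e) {
            // No silent failures: make the problem visible to users and support staff.
            System.err.println("Update check failed: " + e);
        }
    }

    private String fetchLatestVersion() throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(VERSION_URL).openConnection();
        conn.setUseCaches(false);                              // dodge intermediate caching
        conn.setRequestProperty("Cache-Control", "no-cache");
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        try {
            String line = in.readLine();
            return (line == null) ? "" : line.trim();
        } finally {
            in.close();
        }
    }

    private void resumeDownload(File partial) throws IOException {
        long have = partial.exists() ? partial.length() : 0;
        HttpURLConnection conn = (HttpURLConnection) new URL(JAR_URL).openConnection();
        conn.setUseCaches(false);
        conn.setRequestProperty("Range", "bytes=" + have + "-"); // keep the 1.9 MB already fetched
        InputStream in = conn.getInputStream();
        OutputStream out = new FileOutputStream(partial, true);  // append to the partial file
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        } finally {
            out.close();
            in.close();
        }
    }

    public static void main(String[] args) {
        new Thread(new UpdateChecker()).start();  // download in the background; version N keeps running
    }
}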