Excellent JavaScript talk from Yahoo

Over at Yahoo Video you can watch an excellent talk by Doug Crockford on JavaScript (part 1, part 2, part 3, part 4). This is likely the best introduction to JavaScript I have seen, and worthwhile even if you’ve been using JS for years.

Why does JavaScript matter?

1) It is ubiquitous now (in nearly every browser, in Flash as ActionScript, etc.)

2) It is likely to be the default choice for building scriptable Java applications, due to the Rhino JS interpreter “in the box” in Java 1.6 (see the sketch below)
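For example, here is a minimal sketch of driving Rhino through the javax.script API that ships with Java 6 (the class name, variable, and script contents are invented for illustration):

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;

    public class RhinoSketch {
        public static void main(String[] args) throws ScriptException {
            // Java 6 bundles Rhino behind the standard javax.script API,
            // registered under the engine name "JavaScript".
            ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");

            // Hand a Java value to the script as a global variable...
            js.put("base", 6);

            // ...then evaluate JavaScript against it; eval returns the
            // value of the script's last expression.
            Object result = js.eval("var answer = base * 7; 'answer = ' + answer;");
            System.out.println(result); // prints: answer = 42
        }
    }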

Update: These videos are more conveniently all on one page here.

High Quality Screen Recordings

At Oasis Digital we’ve found that we can communicate effectively with each other and with customers, across time and space, using screen + audio recording (also called screencasts or screen videos). We use these to demonstrate a new feature, to explain how code works, to describe how a new feature should work, etc. The communication is not as good as a live, in-person meeting/demo, but the advantages often outweigh that factor:

  1. No travel.
  2. No need to synchronize schedules.
  3. The receiving person can view the recording repeatedly, at their convenience.
  4. Customers and developers who join the project team later can look at old recordings to catch up.

It turns out that I am unusually picky about the quality of such recordings; I’ve written up some technical notes on how to get good results, and posted them: HighQualityScreenRecordings.pdf.

A few highlights:

  • A reasonably fast computer can run an application and record screen video at the same time; but if you will be recording the use of an application that generates a lot of disk activity, you must save the video to a hard drive (internal, external, network server, etc.) separate from the one your OS and applications run from. (For applications that generate little disk activity, a single system hard drive works fine.)
  • Use a headset-style microphone, and record in a quiet place: close the door, turn off the music, etc.
  • Adjust your audio levels well. Please. This is the most common, and most annoying, problem I find with screencast and podcast recordings.
  • Bytes are cheap; use a sufficiently large window and a sufficiently high bitrate (see the quick arithmetic below).
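To put numbers on that last point, a quick back-of-the-envelope sketch; the 30 minutes and 500 kbit/s are merely illustrative figures:

    public class ScreencastSize {
        public static void main(String[] args) {
            // Illustrative figures: a 30-minute recording at a generous
            // 500 kbit/s combined video + audio bitrate.
            double minutes = 30;
            double kbitPerSec = 500;

            double megabytes = minutes * 60 * kbitPerSec / 8 / 1024;
            System.out.printf("~%.0f MB for a %.0f-minute recording%n",
                    megabytes, minutes);
            // ~110 MB: trivial to store or transfer, so there is little
            // reason to strangle the recording quality to save bytes.
        }
    }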

Many more details are in the PDF linked above.

Google Tech Talks

Google, a mecca for top-notch programmers, attracts many top speakers to give talks on (generally) technical topics. They graciously record these talks and upload them to Google Video. You can get a list of most of them by searching video.google.com for “engEDU”. Think of these as virtual user group talks, but usually with bigger “name” speakers than a typical local group offers.

Here are just a few that I’ve enjoyed recently; there are many more worth watching.

How Debian (Ubuntu) packages work

Seth Godin (marketing guru)

Mary Poppendieck (“Lean Software Development” author) – Competing on the basis of speed

A New Way to Look at Networking – Fascinating

The Mercurial distributed source control system

Added later:

Fission is the New Fire

Jessica Livingston, talking about “Founders at Work”

Still later:

Subscribe to this feed to find out about each talk

I have seen the future, and it runs OSX

iPhone: Wow

It’s a phone. It’s a PDA. It’s an iPod. It’s a widescreen video iPod. It has zero physical buttons; rather, the whole front is a multi-touch screen. I’ll leave the rest of the raving to the many other sites doing a great job of that.

The real innovation of this new device is the OS. Apple has an answer to PalmOS and Windows Mobile (CE): run their real desktop/server OSX, with Unix inside, on the phone. Today’s handhelds have much more computing power and storage than desktop PCs of a decade ago, and there are enormous benefits to running a real, common OS on such a device. I’d been saying since I bought my first Palm (a Handspring, actually) that within a decade the handheld OSs would go away.

Apple has gone first. How soon will Microsoft follow? Will Palm and RIM make it to the new era at all?

(Update: Yes, the title of this post is slightly in jest. I’m serious about real OSs in handheld devices, and the iPhone looks fantastic, but Apple is very unlikely to dominate the phone market in the way the iPod dominates the tiny-media-player market.)

YouTube’s 45 Terabytes… no big deal?

Over at the Wall Street Journal and Micro Persuasion and Computers.net and a bunch of other places, a big deal is being made of YouTube’s estimated 45 terabytes worth of video. It is “about 5,000 home computers’ worth”. Ouch, 45 terabytes! Wow!

Or maybe not… consider the mathematics.

45 TB really isn’t all that much data. I’ll assume that each video is stored on 6 hard drives across their systems, for reliability and greater bandwidth, for a total of ~300 TB of hard drives. A 300 GB hard drive costs under $200, and ~1000 will be needed, so this is about $200,000 worth of hard drives, which is not a big deal for a major venture-funded firm like YouTube. (Side note – if you need a lot of read-mostly disk bandwidth, you are much better off buying several cheap hard drives with the same data on them, than one expensive hard drive. It’s not even close.)

The 1000 hard drives might be spread across 250 servers. If their system is built in the economically smart way (the Google way – lots of commodity servers), each server could cost as little as $3000. Those servers could likely serve the traffic as well, and (at a lower priority) do any needed video transcoding of newly uploaded video. After all, it is mostly static content, and it’s likely that a small fraction of the videos are a large fraction of the views, so the popular ones stay in RAM cache. Adding other machines for various purposes, network hardware, etc., a YouTube-scale app/storage cluster might cost as little as $2 million, of which a relatively small portion (above) is hard drives.
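Here is the same back-of-the-envelope estimate in code form; every figure in it is an assumption carried over from the paragraphs above:

    public class YouTubeEstimate {
        public static void main(String[] args) {
            // Round 2006-era figures from the text above; all assumptions.
            double videoTB = 45;        // estimated unique video data
            int copies = 6;             // replicas for reliability + bandwidth
            double driveTB = 0.3;       // 300 GB drives...
            double drivePrice = 200;    // ...at under $200 each
            int drivesPerServer = 4;    // assumed: ~1000 drives across ~250 servers
            double serverPrice = 3000;  // cheap commodity box, the "Google way"

            double totalTB = videoTB * copies;            // 270 TB, call it ~300
            long drives = Math.round(totalTB / driveTB);  // 900, call it ~1000
            long servers = drives / drivesPerServer;      // 225, call it ~250

            System.out.printf("%.0f TB of raw disk on ~%d drives: ~$%,.0f%n",
                    totalTB, drives, drives * drivePrice);
            System.out.printf("~%d servers: ~$%,.0f%n",
                    servers, servers * serverPrice);
        }
    }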

Of course I’ve totally skipped the question of paying for the bandwidth (and power), which must be staggeringly expensive.

Intel Integrated Performance Primitives Bewilderment

I’ve been evaluating Intel’s Integrated Performance Primitives, a set of libraries for low-level signal processing and image manipulation. These appear to be very well engineered at the lowest levels, but the packaging is a mess, as I’ll describe below. The point of these libraries is that they use the various enhanced CPU instruction sets (MMX, SSE, SSE2, SSE3, are there more?) of the various Intel processors. Such enhancements are very useful if you are processing a lot of images in a hurry… such as in a video stream.

Big gripe: nearly all of the CPU enhancements that these libraries use are also available on AMD hardware; but being Intel libraries, they don’t auto-detect and use those features on AMD CPUs. Therefore an off-the-shelf, simple benchmark will show AMD radically slower. Apparently with the right tweaks the AMD features can be used as well, and work fine. Now I can see why Intel would do this, but it has the side effect of making them look scared. Looking scared is not good marketing. I believe it would have been better for them to support AMD seamlessly, and let the benchmarks show that the Intel stuff is a bit faster… if indeed that is what they show.

Now for the bewildering deployment model: These libraries arrive as a large pile of DLLs, most of them in several flavors, one for each processor family. But rather than putting the “which DLL to load” dispatch code in a DLL, that code is supplied only in a .lib file – so it’s quite amenable to use from C code, and much less so from non-C code.

Next bewilderment: To the “rest of us”, getting a machine with a 64-bit processor (Intel EM64T, AMD64) is no big deal; it’s just another CPU family. But to the Intel IPP developers it’s a separate product version, with a separate way to invoke it, and no clear example code showing how to auto-load the right libraries (any of the 32-bit ones, or the 64-bit) depending on CPU type. Apparently it did not occur to them that this could have been seamless.
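To show how little code the “seamless” version would take, here is a hypothetical sketch (in Java; the library names are invented, not real IPP file names):

    public class CpuDispatchSketch {
        public static void main(String[] args) {
            // Hypothetical dispatch logic of the kind argued for above;
            // the library names are invented, not actual IPP DLL names.
            String arch = System.getProperty("os.arch"); // e.g. "x86", "amd64"
            boolean is64bit = arch.contains("64");

            String lib = is64bit ? "mylib_em64t" : "mylib_ia32";
            System.out.println("Would load: " + lib);

            // A real loader would now call System.loadLibrary(lib), with the
            // 32-bit branch further selecting among per-CPU-family binaries
            // (MMX/SSE/SSE2/SSE3) detected at runtime.
        }
    }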

Next bewilderment: Until a few years ago, the “Intel JPEG Library” was a popular, free, and very fast JPEG compression / decompression library. It’s not free anymore; it is now part of IPP. I have no objection to that; IPP is a very good value at $200 per developer. The bewilderment is that rather than supplying the working “Intel JPEG Library” as part of IPP, they instead supply it only as example code. The developer then compiles the sample code to get an IJL DLL that runs on top of IPP, with all of the deployment issues mentioned above. Again, this could have been trivial, and would meet a lot of developers’ needs, but is instead a mess of accidental complexity. IPP provides a large number of low-level functions useful for writing a JPEG compressor / decompressor, but a lot of us (myself and this fellow, for example) don’t want to learn the inner workings of JPEG; it seems as though IPP could benefit greatly from some higher-level API calls.