In the Arena

At some point almost every day, I wander over to Hacker News, which has some great discussion (along with some less great discussion) among people pursuing, or aspiring to pursue, a software startup or similar business. Likewise with local events (like those ITEN STL offers), and even more so the Business of Software conference earlier this month (experiences).

I used to have a software product business myself, a vertical market SaaS firm. Now that I’ve been out of that for over a year, the thing I miss most is the feeling of being “in the arena”, of having a speculative product out there for people to buy. To be out there is both terrifying and exhilarating. I have heard it said that there are “product people” and “consulting people”, and looking back it is clear to me that I am mostly in the Product category.

Unlike some product people (like Amy Hoy, whom I admire greatly!), I don’t think it’s necessary to swear off one thing to do the other. Consulting (building software for clients) is very satisfying, especially when working with a team of great people (and a group of very competent customers) like we have at Oasis Digital.

So while I’m going to keep building software for other people, I’m also going to go back to the marketplace with speculative products. This time it will be products in the plural, some subset of:

  • Web/SaaS software
  • iPad software
  • iPhone / iPod Touch software
  • Android software (by year-end the stores will be piled high with Android tablets)
  • Or possibly HTML5/etc software to address all of the above
  • Backend / data / system management software
  • Or even, possibly, locally installed desktop software

I apologize for the vagueness of this list, but I agree with Derek Sivers about keeping one’s specific goals to oneself, so my voluminous and tedious notes on exactly what products to offer will remain offline.

October 2010: Business of Software, Strange Loop, Clojure Conj

I attended three conferences in October 2010, the most of any month of my life to date. Others have posted extensively about all three events, so I’ll link to a few posts and point out highlights for me.

Business of Software 2010

BoS alternates between San Francisco and Boston; this year it was in Boston. There are plenty of excellent summaries online (here, here, here, here), and an especially nice set of photos here.

The conference was packed full of great speakers, mostly well known. I am sure the most “expensive” person in the lineup was Seth Godin; he is an excellent speaker and had interesting content, but wasn’t as relevant to me as some of the others.

The high point of BoS was Joel Spolsky’s closing talk. Unlike everyone else, he used no slides, and simply sat at a table to tell us the story of his last year or so. I was a bit surprised at his public airing of partner grievances, but that was probably necessary to tell the (very worthwhile) story of his transition over the last year from the “small, profitable company” model to the “go big” model. The former can make good money; but only the latter can make a broad impact to build a (perhaps slightly) better world.

I also especially enjoyed Eric Sink and Derek Sivers telling the stories of their company sales. My own company sale experience was more like Eric Sink’s.

In the past, Business of Software has posted the videos for year N during the marketing runup for year N+1; I suspect the same will happen this time. When those videos appear, watch them. Especially keep an eye out for Joel’s criticism of Craigslist, with which I agree.

Strange Loop 2010

Strange Loop is held in, and named after, the Delmar Loop area which spans University City and a bit of St. Louis. The 2010 event was much larger than the 2009 event; I don’t know whether it will be possible to accommodate 2011’s crowd in the Loop area or not; I’ll certainly attend either way.

Again there are plenty of summaries online, including here and here.

The highlight of this event for me was Guy Steele’s talk on parallelism. Unlike some commenters, I greatly enjoyed both the first half of the talk (a stroll through some ancient IBM assembly code) and the second half (including the Fortress example code). This talk, and the criticism of it, inspired me to put together my own upcoming code-centric talks, in which I’ll touch on the key parallelism ideas briefly, then step through several code examples in various languages.

I also spoke at Strange Loop, in a 20-minute slot, on Lua (video). Most of the feedback on my talk was positive, particularly about the “why, not how” approach I used to make the best use of 20 minutes. A few people would have preferred a longer talk with more “how”; I might put together such a presentation at a later date.

Disclosure: Oasis Digital sponsored Strange Loop.

Clojure Conj 2010

At Clojure Conj I had the strong impression of being at the start of something big. I believe that Clojure, in spite of the needlessly-feared parentheses, has more “legs” than any other of the current crop of ascendant languages: getting state right (and thus making it possible to get parallelism right) is more important than syntax. Based on the folks I met at the Conj, I’d say Clojure has exactly the right early adopters on board.

As usual plenty of others have posted detailed notes (here, here, here, here, here).

The talk that stands out most to me was not exactly about Clojure. Rich Hickey’s keynote was about the importance and process of thinking deeply about problems to create a solution. In a sense this is the counterpoint to agile, rapid-iteration development, suitable to a different class of problems. Clojure exudes a sense of having been thought about in depth, and Rich is obviously the #1 deep thinker. When this arrives on video, watch it. Twice.

I also enjoyed Rich’s impromptu Go clinic at the pre-conference speaker (and sponsor) dinner. Note that Go has totally different rules from the similarly named Go-Moku, and is not to be confused with Google’s Go language.

Disclosure: Oasis Digital sponsored Clojure Conj.

Back to Work

I’ve had very little time for my own projects this month; between the events, most of my available hours were occupied with Oasis Digital customers. My mind is bursting with worthwhile ideas to pursue.

Map-Reduce in the Small: an Array of Talks

At Strange Loop 2010, Guy Steele gave a wide-ranging, excellent talk about parallelism. In essence, his key point is to use a divide-and-conquer approach, which he described as “map-reduce in the small” (or some similar phrase). This is analogous to the techniques used to partition work in large distributed systems, but applied inside a single program.

I heartily agree with all of this. Massive multicore will be a dominant factor in software design in the coming decade. In 2010, most of us are happily waiting in a calm before a storm, because our multicore machines don’t have very many cores yet. For most applications, we get by with very coarse parallelism (such as one thread per concurrent user request being served, in an application server or web server). This won’t last – when cheap PCs have 50+ cores, most software will need to harness parallelism in a much more fine-grained way. Allocating only one core per concurrent operation will become ridiculous.

You can download Steele’s slides from the Strange Loop site, or watch this video of his previous talk at ICFP 2009 in which he covered some of the same material. At Strange Loop, Steele showed how to solve a particular problem (counting words in a string) in a manner amenable to parallel processing. His sample code was written in Fortress, a Sun-Oracle research language. Fortress didn’t bother me, but I heard some discussions about the language choice (and rapid presentation) as an obstacle to detailed understanding.
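
To give a rough sense of the shape of such a solution in a widely used language, here is a small sketch in Java using the fork/join framework from Java 7’s java.util.concurrent. The class name, the split-at-whitespace trick, and the threshold are my own illustrative choices, not code from Steele’s talk:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

/** Counts words by divide-and-conquer, so the two halves can be processed in parallel. */
public class ParallelWordCount extends RecursiveTask<Integer> {
    private static final int SEQUENTIAL_THRESHOLD = 10_000;  // below this, just loop
    private final String text;
    private final int lo, hi;   // half-open range [lo, hi) of text to count

    public ParallelWordCount(String text, int lo, int hi) {
        this.text = text;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Integer compute() {
        if (hi - lo <= SEQUENTIAL_THRESHOLD) {
            return sequentialCount();
        }
        // Split near the middle, then slide forward to the next whitespace so no word
        // straddles the cut; that keeps the combining step a simple addition.
        int mid = (lo + hi) / 2;
        while (mid < hi && !Character.isWhitespace(text.charAt(mid))) {
            mid++;
        }
        if (mid == hi) {
            return sequentialCount();   // no split point found; count this chunk directly
        }
        ParallelWordCount left = new ParallelWordCount(text, lo, mid);
        ParallelWordCount right = new ParallelWordCount(text, mid, hi);
        left.fork();                           // queue the left half to run in parallel
        return right.compute() + left.join();  // compute the right half here, then combine
    }

    private int sequentialCount() {
        int count = 0;
        boolean inWord = false;
        for (int i = lo; i < hi; i++) {
            boolean isSpace = Character.isWhitespace(text.charAt(i));
            if (!isSpace && !inWord) {
                count++;                       // a new word starts here
            }
            inWord = !isSpace;
        }
        return count;
    }

    public static void main(String[] args) {
        String sample = "the quick brown fox jumps over the lazy dog";
        int words = new ForkJoinPool()
                .invoke(new ParallelWordCount(sample, 0, sample.length()));
        System.out.println(words);             // prints 9
    }
}

The split is nudged forward to a word boundary so that combining the two halves is plain addition; Steele’s Fortress version is more general, splitting anywhere and carrying partial words at the edges of each segment so that the pieces still combine associatively.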

I propose to elaborate these ideas by walking through sample code, in a series of talks.

Talk Proposal: Map-Reduce in the Small

First, I will briefly summarize the need for fine-grained parallelism.

Then, I will present three code walkthrough examples in a widely used language.

  1. The word-count problem from Guy Steele’s Strange Loop talk.
  2. Another simple algorithmic / computer-science-flavored example.
  3. As time allows, a third example from a thoroughly practical, enterprise-app-flavored problem space.

This talk will use very few slides; instead it will be all about the code. Each example will show how (not just why) to parallelize algorithms.

An Array[0..N] of Talks

This idea is worthy of deep understanding and practice, so I have in mind several talks, each using a different language, and with sufficient differences to be interesting even for someone who happens to be present for all of them:

  1. Examples in Java, at the St. Louis Java user group
  2. Examples in Clojure, at the St. Louis Clojure Lunch Cljub (there are already some good Clojure examples out there, making this one easy, but still worthwhile).
  3. Examples in another language (perhaps more esoteric, to be determined later), at Lambda Lounge
  4. A repeat of one of those languages, with updated examples, at Strange Loop 2011 (no web site yet!)

Putting these together will take a while, so I have in mind spreading these over the next year or so. Of course, it is quite possible that only a subset of these groups/events, possibly an empty set, will accept this talk offer. In that case, I’ll take it as a sign that the St. Louis community already understands micro-parallelism in depth, and celebrate!

Update: Schedule

As these talks are scheduled, I will put the information here:

  1. Java code at the St. Louis JUG: Jan 13, 2011.

Lua Doesn’t Suck – Strange Loop 2010 video

At Strange Loop 2010, I gave a 20-minute talk on Lua. The talk briefly covered six reasons (why, not how) to choose Lua for embedded scripting: Lua is safe, fast, simple, easily learned, and more popular than you might expect.

The Strange Loop crew only recorded video in the two largest venues (out of six), so I made a “bootleg” video of my talk, for your viewing pleasure:

[Embedded video of the talk]

The video/audio sync starts out OK, but drifts off by a second or so by the end. The drift is minor, so it is reasonably viewable all the way through. If you don’t have Flash installed (and thus don’t see the video above), you can download the video (x264); it plays well on most platforms (including an iPad).

The slides are available for PDF download.


Video Hackery

This video recording was an experiment: instead of hiring a video crew (with professional equipment), or using my DV camcorder, I instead used the video recording capability of my family’s consumer-grade Canon digicam. This device has three advantages over my DV camcorder:

  1. No tape machinery; no motors; thus no motor noise in the audio.
  2. Smaller size, easier to carry in and out.
  3. Directly produces a video file, easily copied off its SD card.

As you can see from the results, the video quality is adequate but not great. Still, I learned that if I want to increase the quality of a recording, the first step is not to use a better camera or lens! Rather, it is to bring (or persuade the venue to provide) better light. For good video results, the key is to light the speaker well, without shining any extra light on the projector screen. With that in place, a better camera makes sense.

The audio was a different story. Like nearly all consumer video cameras (and digicams with video), mine doesn’t have an external audio input, so the audio (recorded from ~12 feet away) was awful. As a backup I had also used a $75 audio recorder and a $30 lapel microphone; that audio is very good, certainly worth using instead of the camera’s audio track.

To combine the video in file A with the audio in file B, I used the ffmpeg invocation below. I reached the time adjustments below in just a few iterations of trial and error, by watching the drafts in VLC, using “f” and “g” to experiment with the audio/video time sync. I also trimmed off a bit of the bottom of the video, and used “mp4creator.exe -optimize”, which I had handy on a Windows machine, to prepare the file for progressive download viewing.

# -ss 34.0 before each -i trims the first 34 seconds of that input; -itsoffset -12.05
# shifts the timestamps of the second input (the video) to line up with the separately
# recorded audio; -async 200 lets ffmpeg stretch/squeeze the audio to hold sync;
# -cropbottom 120 trims 120 pixels from the bottom of the frame; -vpre normal and
# -b 400k select the x264 preset and target video bitrate.
ffmpeg -y \
  -ss 34.0 -i WS_10001.WMA \
  -ss 34.0 -itsoffset -12.05 -i MVI_4285.AVI \
  -shortest -t 8000 \
  -vcodec libx264 -vpre normal -cropbottom 120 -b 400k \
  -threads 2 -async 200 \
  Cordes-2010-StrangeLoop-Lua.m4v

The remaining bits of technology are FlowPlayer, a WordPress FlowPlayer plugin, and a CDN.

Apple is Building a Bigger Footprint

I’ve seen a lot of people writing (whining?) about being unimpressed by some of the new Apple products/features announced at their event this week. These folks are missing Apple’s strategy. Several of Apple’s new gizmos are laying a foundation and pointing down an obvious path for their products:

iTunes-Ping Social Network

It doesn’t integrate with other social networks, and the initial content on there (from what I’ve heard, I didn’t even bother to look) is weak, consisting mostly of some well-known musicians who (shockingly!) think you should buy their music. None of this matters. If Ping v1 gets Apple some incremental sales in the iTunes store, it’s a win. Apple has an enormous audience, so inevitably Ping will get some level of community, and with that in place Apple can bring out a v2 in which they add integration, support in all of their devices, etc.

Apple TV

The 720p resolution is a disappointment, and I find the lack of a digital audio output rather surprising. No apps. On technical features, it doesn’t compare well with some similar products already out there.

But it’s a whizbang, very friendly Apple device for $99! It will probably sell in big numbers, to many millions of consumers who haven’t heard of (and won’t ever hear of) the similar products from lesser-known firms. Once there is a big user base out there, Apple can announce Apps and other features for Apple TV, with enormous fanfare and day-1 sales volume.

iPod Nano with multi-touch

This doesn’t have apps yet either, but similarly, the foundation is clearly there. Apple can add some form of Nano Apps in the future, again with an installed base already in place, ready to start buying apps (in volume) on day 1.

iPad Multitasking

As a power user and developer, I find it a bit annoying that the iPad, a powerful computer launched in 2010, didn’t ship with multitasking. Yet I’m very happy with the device anyway, and it didn’t stop Apple from selling a billion dollars’ worth of them in a few months. Adding multitasking for free in November is a nice (small) step forward, plenty good enough for a mid-cycle update.

Winning Big

Apple isn’t merely trying to win today. They are trying to win big.

Sorry, Not a Fanboy

But am I a fan of all this?

  • As a user, yes. I enjoy Apple’s products. I am typing this on a MacBook Pro, with an iPad sitting nearby.
  • As a developer, I am not a fan of the increasingly closed ecosystem they are building. It reminds me ominously of the first generation of home computers, and of current gaming systems, where the manufacturer interposes itself in every financial transaction forever. That approach lost to an open market back in the 1980s; I hope it does not take over the PC world ever again.
  • In addition, the closed ecosystem excludes many possible applications that would be very useful to our customers.
  • As a business aficionado, I nod approvingly at their strategy and execution prowess.

Update: I’m not the only one who thinks Ping will make Apple a lot of money, regardless of its merit.

Refactoring some Factor code

Most of the software I work with is very practical. At Oasis Digital we mostly create line-of-business enterprise software, and even when I step away from that, I usually pick up a tool or language that has a good likelihood of mainstream adoption.

Sometimes, though, I like to really stretch my mind. For that, it’s hard to beat Factor. Factor is fascinating in that it combines a goal of efficiency and practicality with a syntax and computation model that are quite alien even to a software polyglot. Don’t let the stack-ness deceive you; it’s a big leap even if you’ve used FORTH and grown up with an HP RPN programmable calculator.

So I set about this evening to work with some Factor code, a simple GUI calculator posted a few days ago by John Benediktsson. I bit off an apparently small bit of work: remove the “code smell” of its global variable and, in the process, make it so that multiple calcs each have their own model (rather than a single global shared state).

Original version from John

My finished version

The two most important pieces of updated code appear in the finished version linked above. The changes consist approximately of the following (a rough sketch of the same idea, translated into Java terms, follows the list):

  • Change all the button words to accept a model input
  • Change the <row> word to accept a model and use map instead of output>array
  • Remove the calc variable
  • Change the calc-ui word to shuffle things around and use make rather than output>array
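
For readers who are more at home outside Factor, here is the same shape of change sketched in Java. The names and the toy “model” are hypothetical illustrations of the pattern (pass the model in, rather than reaching for a global), not a translation of John’s calculator code:

// Before: one global model, so every calculator shares (and fights over) the same state.
class CalcGlobals {
    static StringBuilder display = new StringBuilder();   // the lone, shared "model"
}

class DigitButtonGlobal {
    private final char digit;
    DigitButtonGlobal(char digit) { this.digit = digit; }
    void press() { CalcGlobals.display.append(digit); }   // reaches out to the global
}

// After: the model is passed in, so each calculator owns its own state.
class CalcModel {
    final StringBuilder display = new StringBuilder();
}

class DigitButton {
    private final char digit;
    private final CalcModel model;                         // dependency handed in, not looked up
    DigitButton(char digit, CalcModel model) {
        this.digit = digit;
        this.model = model;
    }
    void press() { model.display.append(digit); }
}

public class TwoCalculators {
    public static void main(String[] args) {
        CalcModel first = new CalcModel();
        CalcModel second = new CalcModel();
        new DigitButton('7', first).press();
        new DigitButton('3', second).press();
        System.out.println(first.display + " / " + second.display);  // "7 / 3": independent state
    }
}

The Factor version accomplishes this with stack shuffling and quotations rather than constructors, but the essential move is the same: state that used to be looked up globally is handed to each word that needs it.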

In case it isn’t obvious from my text above or the source code, I am not a Factor programmer; please do not use this as example code. On the other hand, I learned a bunch of little things about Factor, and perhaps implicitly about concatenative programming in general, in the process of making this work.