Rhino + JavaScript + Swing, Look Ma No Java

A while back I was discussing the future of programming languages with a colleague, and we agreed that for all its foibles, JavaScript will continue to enjoy very wide and increasing use in the coming years. I wrote last year about Steve Yegge’s hints that JavaScript is the “next big language”; see that post for the reasoning.

Based on all that, I set about writing a small test app to see what it’s like to program a Swing app with JS.  After a day or so of work (spread over a few months), I offer my Rhino Swing Test App:

Run RSTA now via Java Web Start

Get the RSTA code (git, on GitHub)

It implements the same “flying boxes” animation demo that I presented a few years ago at the St. Louis JUG, but aside from a generic launcher class, the GUI is implemented entirely in JavaScript. To clarify, this is not web browser JavaScript; it is running in Rhino, in the JVM, using Swing classes.

The documentation for interaction between Java and JS is limited, but sufficient. For simplicity, I used Rhino as an interpreter; I did not compile to Java bytecode. Nonetheless, the animation runs about as smoothly in JS as it does in Java, because the heavy lifting is done by the JDK classes.
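
For flavor, here is a minimal sketch (not taken from the RSTA code itself) of what driving Swing from Rhino looks like. Note how Rhino adapts a plain JS function to a single-method Java interface such as ActionListener:

    // hello-swing.js - run with: java -jar js.jar hello-swing.js
    importPackage(javax.swing);

    var frame = new JFrame("Rhino + Swing");
    var button = new JButton("Click me");

    // Rhino converts the JS function into an ActionListener implementation
    button.addActionListener(function (event) {
        button.setText("Clicked at " + new java.util.Date());
    });

    frame.getContentPane().add(button);
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.setSize(300, 100);
    frame.setVisible(true);
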
I used Eclipse (with JavaScript support) to write this code, but of course JS permits much less code completion than Java, and I missed that. Typically I mildly prefer IDEs, but am also productive with a text editor. For working with a large API like Swing, though, IDE support helps greatly.

Still, I recommend a look at this approach if you are fond of dynamic languages but need to build on the Java platform, and I intend to investigate server-side JS development also.

Network / System Monitoring Smorgasbord

At one of my firms (a Software as a Service provider), we have a Zabbix installation in place to monitor our piles of mostly Linux servers. Recently we took a closer look at it and found ample opportunities to monitor more aspects, of more machines and devices, more thoroughly. The prospect of increased investment in monitoring led me to look around at the various tools available.

The striking thing about network monitoring tools is that there are so many from which to choose. Wikipedia offers a good list, and the comments on a Rich Lafferty blog post include a short introduction from several of the players. (Update – Jane Curry offers a long and detailed analysis of network / system monitoring and some of these tools (PDF).)

For OS level monitoring (CPU load, disk wait time, # of processes waiting for disk, etc.), Linux exposes extensive information with “top”, “vmstat”, “iostat”, etc. I was disappointed not to find any of these tools conveniently presenting / aggregating / graphing that data. From my short look, some of the tools offer small subsets of it; for details, they offer the ability for me to go in and figure out myself what data I want and how to get it. Thanks.

Network monitoring is a strange marketplace; many of the players have a very similar open source business model, something close to this:

  • core app is open source
  • low tier commercial offering with just a few closed source addons, and support
  • high tier commercial offering with more closed source addons, and more support

I wonder if any of them are making any money.

Some of these tools are agent-based, others are agent-less. I have not worked with network monitoring in enough depth to offer an informed opinion on which design is better; however, I have worked with network equipment enough to know that it’s silly not to leverage SNMP.
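
For the unfamiliar: with SNMP enabled on a device, a monitoring server can poll standardized counters with no agent installed at all. For example, with the net-snmp command line tools (the hostname, community string, and interface index here are illustrative):

    # read the inbound byte counter of interface 1 on a switch
    snmpget -v 2c -c public switch1.example.com IF-MIB::ifInOctets.1
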
I spent yesterday looking around at some of the products on the Wikipedia list, in varying levels of depth. Here I offer first impressions and comments; please don’t expect this to be comprehensive, nor in any particular order.

Zabbix

Our old installation is Zabbix 1.4; I test-drove Zabbix 1.6 (advertised on the Zabbix site as “New look, New touch, New features”). The look seemed very similar to 1.4, but the new feature list is nice.

We mostly run Ubuntu 8.04, which offers a package for Zabbix 1.4. Happily, 8.04 packages for Zabbix 1.6 are available at http://oss.travelping.com/trac.

The Zabbix agent is delightfully small and lightweight, easily installing with an Ubuntu package. In its one configuration file, you can tell it how to retrieve additional kinds of data. It also offers a “sender”, a very small executable that transmits a piece of application-provided data to your Zabbix server.
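
Both mechanisms are one-liners. A sketch (the key names, script path, and hostnames are illustrative, not from our production setup):

    # in zabbix_agentd.conf: expose a custom piece of data as an item
    UserParameter=app.queue_depth,/usr/local/bin/queue-depth.sh

    # from application code or cron: push a value straight to the server
    zabbix_sender -z zabbix.example.com -s webhost1 -k app.orders_per_min -o 42
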

I am reasonably happy with Zabbix’s capabilities, but I find the GUI design to be pretty weak, with lots of clicking to get through each bit of configuration. I built far better GUIs in the mid-90s with far inferior tools to what we have today.  Don’t take this as an attack on Zabbix in particular though; I have the same complaint about most of the other tools here.

We run PostgreSQL; Zabbix doesn’t offer any PG monitoring in the box, but I was able to follow the tips at http://www.zabbix.com/wiki/doku.php?id=howto:postgresql and get it running. The monitoring described there is quite high-level and unimpressive, though.
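
The approach on that wiki page boils down to UserParameter entries wrapping psql one-liners, along these lines (a sketch; the key name, database, and role are illustrative):

    # in zabbix_agentd.conf on the database host
    UserParameter=pgsql.connections,psql -U zabbix -d appdb -Atc "SELECT count(*) FROM pg_stat_activity"
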

Hyperic

I was favorably impressed by the Hyperic server installation, which got two very important things right:

  1. It included its own PostgreSQL 8.2, in its own directory, which it used in a way that did not interfere with my existing PG on the machine.
  2. It needed a setting changed (shmmax), which can only be adjusted by root. Most companies faced with this need would simply insist the installer run as root. Hyperic instead emitted a short script file to make the change (sketched below), and asked me to run that script as root. This greatly increased my inclination to trust Hyperic.
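
The emitted script amounts to a couple of standard commands (the value here is illustrative):

    # raise the shared memory ceiling now...
    sysctl -w kernel.shmmax=268435456
    # ...and persist the change across reboots
    echo "kernel.shmmax = 268435456" >> /etc/sysctl.conf
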

Compared to Zabbix, the Hyperic agent is very large: a 50 MB tar file, which expands out to 100 MB and includes a JRE. Hyperic’s web site says “The agent’s implementation is designed to have a compact memory and CPU utilization footprint”, a description so silly that it undoes the trust built up above. It would be more honest and useful of them to describe their agent as very featureful and therefore relatively large, while providing some statistics to (hopefully) show that even its largish footprint is not significant on most modern servers.

Setting all that aside, I found Hyperic effective out-of-the-box, with useful auto-discovery of services (such as specific disk volumes and software packages) worth monitoring; it is far ahead of Zabbix in this regard.

For PostgreSQL, Hyperic shows limited data. It offers table- and index-level data for PG up through 8.3, though I was unable to get this to work, and had to rely on the documentation instead for evaluation. This is more impressive at first glance than what Zabbix offers, but is still nowhere near sufficiently good for a substantial production database system.

Ganglia

Unlike the other tools here, Ganglia comes from the world of high-performance cluster computing. It is nonetheless apparently quite suitable nowadays for a typical pile of servers. Ganglia aims to efficiently gather extensive, high-rate data from many machines, using an efficient on-the-wire data representation (XDR) and networking (UDP, including multicast). While the other tools typically gather data at increments of once per minute, per 5 minutes, or per 10 minutes, Ganglia is comfortable gathering many data points, for many servers, every second.
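
Beyond the metrics its gmond agent gathers automatically, arbitrary application data can be injected with the bundled gmetric tool, e.g. (metric name and value illustrative):

    gmetric --name request_latency_ms --value 12.3 --type float --units ms
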

The Ganglia packages available in Ubuntu 8.04 are quite obsolete, but there are useful instructions here to help with a manual install.

Nagios

I used Nagios briefly a long time ago, but I wasn’t involved in the configuration. As I read about all these tools, I see many comments about the complexity of configuring Nagios, and I get the general impression that it is drifting into history. However, I also get the impression that its community is vast, with Nagios-compatible data gathering tools for any imaginable purpose.

Others

  • Zenoss
  • Groundwork
  • Munin
  • Cacti

How Many Monitoring Systems Does One Company Need?

It is tempting to use more than one monitoring system, to quickly get the logical union of their good features. I don’t recommend this, though; it takes a lot of work and discipline to set up and operate a monitoring system well, and dividing your energy across more than one system will likely lead to poor use of all of them.

On the contrary, there is enormous benefit to integrated, comprehensive monitoring, so much so that it makes sense to me to replace application-specific monitors with data feeds into an integrated system. For example, in our project we might discard some code that populates RRD files with history information and publishes graphs, and instead feed this data into a central monitoring system, using its off-the-shelf features for storage and graphing.
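
With Zabbix, for instance, the replacement could be as small as swapping an rrdtool invocation for a zabbix_sender one (hostnames and keys illustrative):

    # before: append a sample to a local RRD file
    rrdtool update history.rrd N:42
    # after: hand the same sample to the central monitoring server
    zabbix_sender -z zabbix.example.com -s apphost1 -k app.history_value -o 42
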

A flip side of the above is that as far as I can tell, none of these systems offers detailed DBA-grade database performance monitoring. For our PostgreSQL systems, something like pgFouine is worth a look.

Conclusion

I plan to keep looking and learning, especially about Zenoss and Ganglia. For the moment though, our existing Zabbix, upgraded to the current version, seems like a reasonable choice.

Comments are welcome, in particular from anyone who can offer comparative information based on substantial experience with more than one of these tools.

Ease of Installation: DokuWiki, PHP, files

In the past I’ve installed MediaWiki, ruwiki, git-wiki, and several other Wiki implementations (in Perl, Java, etc.), with varying degrees of effort. For example, ruwiki required considerable gymnastics to get the right Ruby libraries in place on the machine I hosted it on, MediaWiki required a database, etc. Ruby libraries, databases, JVMs, and the like are all at the top of my toolbox, so in most cases it’s just a few minutes and a few commands, which seems amply easy until compared with…

Yesterday I set up a DokuWiki instance (which stlruby.org might migrate to), and found that its underpinnings (PHP, plain text files) make for ridiculously easy installation, sketched concretely below:

  • wget
  • tar xzf
  • browse to install page, set a few settings
  • delete install page
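
In concrete commands (the URL and paths are illustrative; the install page is install.php):

    wget http://download.dokuwiki.org/src/dokuwiki/dokuwiki-stable.tgz
    tar xzf dokuwiki-stable.tgz
    # browse to http://yourhost/dokuwiki/install.php and set a few settings
    rm dokuwiki/install.php
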

Yet those underpinnings are very well suited to the task at hand. A typical Wiki does not need a database underneath it. As with many things, this reminded me of a general principle: use the least complex, most readily and commonly available, easiest to administer technology appropriate for the task at hand.

Webby – Client-side, static content management system

This afternoon I rebuilt OasisDigital.com using Webby, stripping out hand-coded HTML and replacing it with much more maintainable Markdown. The site looks about the same as before (which is to say, mediocre), but under the hood it is much easier to update. We intend to use this new ease to move forward in improving it. There is a general principle here, which applies broadly in software development also:

If you need to make a change, but that change is difficult / tedious / risky to make, first improve the underlying system that makes it so.

(OasisDigital.com is a static web site; we have dynamic content (issue trackers, etc.) to automate our work together and with our customers, but that content is on another domain.)

Webby is a client-side, simple CMS for generating static web sites, written in Ruby. Why serve a static site (plain old files on a web server) in 2008?

  • It minimizes the moving parts; there is almost nothing to break or maintain.
  • It is very unlikely that any hosting issue will break a static site.
  • It is easy to serve a static site fast (though our current host, TextDrive, sometimes is not all that fast).
  • Security vulnerabilities are very unlikely, in the absence of any executable content.
  • The canonical content (in this case, mostly Markdown) is stored in plain text files, which we track, diff, and merge in git.
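
For flavor, a Webby source page is a plain text file with a small YAML header that selects rendering filters, roughly like this (from memory; the exact keys may differ):

    ---
    title:  Home
    filter:
      - erb
      - markdown
    ---

    # Oasis Digital

    We build custom software.
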

In an earlier foray into Drupal, we found that Drupal has extensive and useful capabilities, as well as a vibrant community, but it also has many moving parts; too many, in my judgment, to make it a good solution for building an essentially static web site.

TDD, Still in Style?

Test Driven Development doesn’t seem to be in style today to the extent it was a few years ago. I think that’s a shame, because TDD is among the most powerful ideas I’ve come across to boost the quality of the software we deliver. At Oasis Digital we even use it in unusual places, like a Palm OS application we develop for a customer.

This application forms the core of our customer’s business; they sell it as a commercial product to their customers, who use it in high-pressure situations. Over 4 major versions and 4+ years, there have been remarkably few bugs or other issues found “in the field”, which I credit to:

  • Use of automated testing
  • An overall level of care and attention to quality

The second item alone is not enough. To achieve quality you need to do more than have the right intentions and work hard. You need to do the right things.

Rearranging files in SVN? Use git-svn instead.

From time to time I need to rearrange a set of files in a project, typically while revamping the file / directory layout of a set of source files. The most direct way to do so is by dragging the files around (using a Windows or Linux GUI), but this is quite tedious to do if the files are in a Subversion repo, because SVN does not understand the file moves and renames unless you use “svn mv” or (with TortoiseSVN) right-click-drag the files to new locations.

The answer to all this is simple: use git-svn for your SVN checkout, instead of the SVN command line tools or Tortoise. You can readily Google for extensive git-svn information to get started.

The workflow with git-svn for file rearrangement is fairly simple, and very fast; a concrete transcript follows the list:

  • “git svn rebase” to get your local git repo up to date with your SVN repo.
  • make new directories, rearrange files, rename files, etc., using any tool you like. This is easy and painless because git (wisely) keeps all the metadata in a top-level .git directory. It is fast because you aren’t running any svn code (or any git code) while doing so.
  • use “git add”, or the git GUI, to add the newly moved / renamed files.
  • “git commit”
  • “git svn dcommit” to push the changes over to SVN.
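
Put together as a shell transcript (directory names illustrative):

    git svn rebase                     # sync the local git repo with SVN
    mkdir -p src/main/java             # rearrange with any tool you like
    mv *.java src/main/java/
    git add -A                         # stage the adds, moves, and deletions
    git commit -m "Reorganize source tree"
    git svn dcommit                    # replay the commit onto the SVN server
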

git / git-svn looks at the content of files, notices (rather than you needing to tell it) that the content has been moved to different filenames in different directories, and generates SVN move/copy operations. It even notices nearly-identical (edited) files and handles them correctly.

One loose end is left hanging: git-svn does not remove now-empty directories from SVN, a consequence of git not tracking directories. Therefore, if you need the empty directories removed from the SVN repo, use a normal SVN client (command line, Tortoise) to do so.

From the description above, this does not initially sound like a big win; but with hundreds of files to move around, the time for the git commands (instant, except for the first and last) is irrelevant, and the time/hassle saved is enormous.

Perhaps someday SVN itself will gain the ability to “just work” when you move or rename files / directories, rather than having to be told.