What is the Best Git GUI (Client) for Windows?

I adopted Git as my primary source control tool a couple of years ago, when I was using Windows as my primary (90%) desktop OS. Since then I’ve switched to 75% Mac OSX, but I still use Git on Windows for a few projects, and I get a lot of questions about Git on Windows.

I use msysgit (and its included GUI) most often myself, but I don’t have a clear answer as to which is the “best” Git GUI for Windows. I can offer this list of choices, though, along with some thoughts about them.

There is also a very long list of Git tools on the main Git wiki; but that page is just a list, without any other information.

msysgit

msysgit is the main project which ships a Windows port of Git. It is based on MSYS, so it fits in the Windows ecosystem a bit better than the cygwin Git port.

msysgit includes the same Tk-based GUI tools as Git on Linux: a commit tool and a repo-browse tool, plus a bit of shell integration to activate the GUI by right-clicking in Windows Explorer, plus a new thing called git-cheetah, which appears to be heading toward Tortoise-style integration. These tools are a bit ugly, but have good and useful functionality. I don’t mind the ugly (I get my fix of stylish software over on my Mac…), and I find the features ample for most work.

If you don’t know where to start, or if you want a Linux-like Git experience, start with msysgit and learn to use its tools.

msysgit is free open source software. It is under active development, and keeps up with the upstream Git versions reasonably well. There is even a portable (zero-install) version available.

My biggest gripe with msysgit (and its GUI) is that I had to figure out how to use it effectively myself. I could have really used a video walkthrough of how to be productive with it, back when I was starting out. That was a long time ago for me, but might be Right Now for people reading this post. Mike Rowe (a reader) helpfully suggested this msysgit tour, which is very helpful though a bit dated.

TortoiseGit

This is an attempt to port TortoiseSVN to git, yielding TortoiseGit. If you like and use TortoiseSVN, you’ll probably find this worth a try. I haven’t tried it yet myself.

TortoiseGit is free open source software, and is under active development.

Git Extensions

This Git GUI has a shell extension (like the Tortoise family) and also a plugin for Visual Studio. From the screen shots, it appears to be feature-rich and complete.

Git Extensions is free open source software, and is under active development.

SmartGit

Unlike the other tools listed here, SmartGit is a commercial product (from a German company), starting at around $70. It appears to be more polished than the others, as is often the case with commercial products. It also appears to be quite feature-rich.

I don’t know how SmartGit fits in with the Git licensing; Git is licensed GPL (v2), so I assume (hope?) SmartGit has found some way to use it under the hood without linking to it in a way that would cause license trouble.

SmartGit requires a Java runtime, implying that it is written in Java. Five years ago I thought of that as a caveat; but today, Java-based GUIs can be extremely attractive and fast, so I don’t see it as a problem at all.

Is IDE Integration Vital?

I know people who swear by their IDE experience, and are aghast at the thought of any daily-use dev tool that is not integrated with their IDE. It is almost as though for this group, multitasking does not exist, and any need to run more than one piece of software at the same time is a defect.

Now I love a good IDE as much as anyone (I’ve urged and coached many developers to move from an editor to an IDE), but I don’t agree with the notion that source control must always be in the IDE. IDE-integrated source control can be very useful, but there are sometimes cases where non-integrated source control wins.

The most common example for me is when using Eclipse on a large, complex system. There are two annoyances I see regularly:

  • Eclipse assumes that one Eclipse project is one source control project, an assumption that is sometimes helpful and sometimes painful. In the latter case, simply ditch the Eclipse integration, and use a whole workspace (N projects) as a single source-control project, outside of Eclipse.
  • Sometimes Eclipse source control integration bogs down performance. Turn it off, and things speed up.

Therefore, when I use Eclipse, I sometimes manage the files from outside, using msysgit, command line, etc. When I have a complex “real-life project” comprised of many Eclipse “projects”, I set up a separate Eclipse workspace for it, apart from other unrelated Eclipse projects.

Feedback Wanted

I’d love to hear about:

  • More Windows Git GUIs to list here
  • Anything else I’ve missed

… via the contact page (link at the top of the page). I try to reply to all email within a few days.

Write your whole stack in JavaScript with Node.JS

Node is a combination of Google’s V8 JavaScript implementation, and various plumbing and libraries. The result is an unusual and clever server programming platform. Node is in a fairly early development phase, and already has a remarkably active community: ~9000 mailing list messages (as of June 2010) and many dozens of projects and libraries. I’ve spent some time digging through Node code and writing small bits of it, and was pleased with what I found.

Why is Node worthy of attention?

  • JavaScript is a Next Big Language; it is everywhere. It is probably the most widely used programming language ever.
  • I know a few things about asynchronous server programming, having done a lot of it in 1990s IVR software; it is very well suited to serving a large user population.
  • Node is accumulating libraries at an impressive rate, indicating momentum.
  • There are significant advantages in developing a whole application stack (server and client code) in a single language. For example, this makes code and business logic sharing work across tiers. Using Node, a JavaScript-HTML tool, a JavaScript-CSS tool, JSON, etc., it is possible to develop a complex web application using only JavaScript.
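To make the cross-tier sharing point concrete, here is a minimal sketch of a validation rule written once and used in both the browser and a Node server. The module name and the username rule are invented for illustration; the pre-ES-module pattern shown was typical of Node code circa 2010.

```javascript
// Hypothetical shared module: one business rule, usable in browser and Node.
var shared = (function () {
  function isValidUsername(name) {
    // the same rule everywhere: 3-15 word characters
    return /^\w{3,15}$/.test(name);
  }
  return { isValidUsername: isValidUsername };
})();

// In a browser, `shared` is simply a global; on the server, Node's module
// system is present, so the same object can also be exported:
if (typeof module !== "undefined") {
  module.exports = shared;
}
```

The client can reject bad input instantly, and the server can re-check with the identical code, so the two tiers cannot drift apart.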

Node is not all unicorns and roses, though; my most serious misgiving about it is that it does not (yet) have a great strategy to make straightforward use of many-core servers. We’ll have to see how that develops over time.

Node Knockout

The team at Fortnight Labs is putting together Node Knockout, a 48-hour Node programming contest. I am a fan of such contests. I’ve offered to help out by being a judge, and I’ve also signed up Oasis Digital as a sponsor.

As a judge, I can’t be on a team; I’d like to see a team or two form here in St. Louis, though.

I’m Dreaming of a Better Social Media Client

I’m not a big social media guy. I’m certainly not a social media consultant, nor a maven. I never used MySpace at all, and I was not among the first to use Facebook, LinkedIn, Twitter, etc. But I do find all of those useful to keep in touch with a bunch of people using all of the above, and I’ve grown quite frustrated with the sorry state of the client applications I’ve tried. Even those whose features work well and look good, don’t really go after the core problem we all either have or will hit: information overload.

Here is what I really want in a social media client application for “power users” who receive a lot on their feeds: people who follow a lot of people on Twitter, have a lot of friends on Facebook, 500+ connections on LinkedIn, etc. Today, these are power users. Over the next few years, this will be “everybody”. Most of these features also make a lot of sense for a business managing its presence.

Table Stakes – The Basics

Support the Big Three (Twitter, LinkedIn, Facebook)

… and hopefully several more. But don’t even come to the party without the big three. I’m looking at you, Twitterific on the iPad, which I otherwise enjoy (and use every day, and paid for).

Ideally, RSS feeds would also flow in, and perhaps email and SMS too. But I don’t want this to be a “unified inbox” to replace an email client; this information would appear here as context for smart reading.

Run On Many Platforms

Mac, PC, iPhone, iPad, Android, Linux, maybe even BlackBerry. It’s not necessary to start with all these, but the target should be to end up with all of them and more, with the core features present everywhere. I’m not looking for crappy ports though. Native, good citizens.

Keep Track of What I’ve Seen

Keep track of what I’ve seen, automatically. Don’t show me again unless I ask. But the act of closing the app should be meaningless, in that it should not mark all data as seen. An example of what not to do is TweetDeck, which has various settings for this, of which I can’t find any combination that does the Right Thing.

Next, the less common ideas:

I Paid for a Lot of Pixels – Use Them

Single-column feed display GUIs? Great idea for a phone. Silly on a PC.

Like most PC users, I have a wide, high resolution screen. Like many power users, I have two screens on some computers. I paid good money for all these pixels because I want to use them. Therefore, when I’m trying to catch up with all these data/tweet/etc. feeds, I want software that makes good use of those pixels. Show me a rich, dense screenful of information at once. Make it look like a stock trader’s screen (or screens).

Our Eyes are All Different – Give Me Knobs

I don’t want extensive customization. I don’t want a whole slew of adjustments. I don’t want a Preferences dialog with 82 tabs. I don’t even want themes. I want a good, clean, default design… but with a few well-considered knobs. Perhaps something like so:

  • font/size knob – because my eyes might work a bit better or worse than yours, and my screen might be higher or lower resolution than yours.
  • information density knob – because sometimes I want to admire a beautiful well-spaced layout, and sometimes I just want to pack more information on there.

Aggregate Across Networks

Many of the people I follow post the same data to at least three social media outlets; then a bunch of others copy/paste or retweet it. Please stop showing me all that duplication!

Instead, aggregate it all together, like Google News does for news sites. Show me each core message once, and then show a (dense, appropriate) display of who/how the information came in. Include a sparkline and other charts to show the continued re-arrival of that same data. This way, I won’t have to endure the duplication directly, and I can more clearly see how information traverses the (underlying, human) social network.
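A minimal sketch of how such de-duplication could start: normalize each message (strip retweet prefixes and links, fold case and punctuation) and group by the normalized text. The field names (`network`, `author`, `text`) and the normalization rules here are invented for illustration; a real client would need fuzzier matching than exact-key grouping.

```javascript
// Collapse near-identical messages from different networks into groups.
function normalize(text) {
  return text
    .replace(/^RT @\w+:\s*/i, "")    // drop retweet prefix
    .replace(/https?:\/\/\S+/g, "")  // drop links, which vary per URL shortener
    .toLowerCase()
    .replace(/[^a-z0-9 ]/g, "")      // drop punctuation
    .replace(/\s+/g, " ")
    .trim();
}

function aggregate(messages) {
  var groups = {};
  messages.forEach(function (m) {
    var key = normalize(m.text);
    (groups[key] = groups[key] || []).push(m);
  });
  return groups; // one entry per core message; each entry lists who/how it arrived
}
```

Each group then carries exactly the who/how data needed to drive the dense display and sparklines described above.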

Some Tweets are More Equal than Others

In an ideal world, every Facebook update, every Tweet, would be a precious flower, to be admired in depth. We don’t live there. Instead, there is a lot of noise; an example fresh in my mind as I write this is the TV show Lost. It may be a great show, but it’s not one I watch, so to me all the Lost chatter is noise. I’ve probably scanned/scrolled past a couple hundred of them (some of them duplicates) over the last few days.

Therefore, a good social media client will make it trivial (one click) for me to tell it which bits I am interested in and which I’m not. I’m not talking about a scoring system, just a simple up/down arrow, for a total of three bins:

  • Important
  • Bulk / default
  • Junk

Apply some automatic classification mechanism (like the naive Bayesian classifiers that have been common for several years now in email spam filtering) to learn from my votes and apply those to future data. By default, highlight the Important, show the Bulk, and hide the Junk.
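As a toy sketch of that learning step (not a production filter): a naive Bayes classifier over word counts, trained by the user’s one-click votes. The bin names and API shape are invented for illustration, and a real client would add stemming, per-author features, and tuned smoothing.

```javascript
// Naive Bayes over word counts, with add-one smoothing over the vocabulary.
function makeClassifier(bins) {
  var counts = {}, totals = {}, vocab = {};
  bins.forEach(function (b) { counts[b] = {}; totals[b] = 0; });

  function words(text) {
    return text.toLowerCase().split(/\W+/).filter(Boolean);
  }

  return {
    // record one up/down vote: "this text belongs in this bin"
    train: function (bin, text) {
      words(text).forEach(function (w) {
        vocab[w] = true;
        counts[bin][w] = (counts[bin][w] || 0) + 1;
        totals[bin] += 1;
      });
    },
    // pick the bin with the highest summed log-probability
    classify: function (text) {
      var V = Object.keys(vocab).length || 1;
      var best = null, bestScore = -Infinity;
      bins.forEach(function (b) {
        var score = 0;
        words(text).forEach(function (w) {
          score += Math.log(((counts[b][w] || 0) + 1) / (totals[b] + V));
        });
        if (score > bestScore) { bestScore = score; best = b; }
      });
      return best;
    }
  };
}
```

After a few “Junk” votes on Lost chatter, similar future messages would land in the hidden bin automatically.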

I Have Several Devices – Sync Them Now

I might look at this river of news on my Mac in the morning, then on my iPad at lunch, then on my Linux netbook in the evening, then sneak an iPhone peek at bedtime. Keep all that “what I’ve seen” and “what’s important” data in sync across them. This means that my dream social media client needs a backend service behind it. It is not necessary for the data feeds to flow through the backend system (though it might be useful); just the user’s attention metadata.


I believe that most or all of those features will be common in a few years. But I’m annoyed by the tsunami of social media feeds now. Is something like this out there? Where?

I could build such an application (with some help!). I’ve worked with APIs of all flavors. I’ve done mobile. I’ve created GUIs that elicit a “Wow”. I understand servers, and asynchronous operations, and scalability, and SaaS. But if I built it, would anyone *buy* it?

The Prolog Story

I’ve told this story in person dozens of times; it’s time to write it down and share it here. I’ve again experimentally recorded a video version (below), which you can view on a non-Flash device here.

The Prolog Story from Kyle Cordes on Vimeo.

I know a little Prolog, which I learned in college – just enough to be dangerous. Armed with that, and some vigorous just-in-time online learning, I used Prolog in a production system a few years ago, with great results. There are two stories about that woven together here; one about the technical reasons for choosing this particular tool, and the other about the business payoff for taking a road less travelled.

In 2004 (or so) I was working on a project for an Oasis Digital customer on a client/server application with SQL Server behind it. This application worked (and still works) very well for the customer, who remains quite happy with it. This is the kind of project where there is an endless series of enhancement and additions, some of them to attack a problem-of-the-moment and some of them to enrich and strengthen the overall application capabilities.

The customer approached us with a very unusual feature request – pardon my generic description here; I don’t want to accidentally reveal any of their business secrets. The feature was described to us declaratively, in terms of a few rules and a bunch of examples of those rules. The wrinkle is that these were not “forward” rules (if X, do Y). Rather, these rules describe scenarios, such that if those scenarios happen, then something else should happen. Moreover, the rules rested on complex transitive/recursive relationships, the sort of thing that SQL is not well suited for.
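The flavor of rule involved (a generic illustration, not the customer’s actual rules) is easy to show: a transitive relationship that takes two lines of Prolog, but an awkward recursive query in SQL.

```prolog
% If A links to B, A reaches B; reachability then chains transitively.
reaches(A, B) :- link(A, B).
reaches(A, C) :- link(A, B), reaches(B, C).
```

The Prolog runtime searches the relationship graph for you; in SQL you would be hand-writing that search.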

An initial analysis found that we would need to implement a complex depth/breadth search algorithm either in the client application or in SQL. This wasn’t a straightforward graph search, though; that part was just the tip of the iceberg. I’m not afraid of algorithmic programming, and Oasis Digital is emphatically not an “OnClick-only” programming shop, so I dug in. After spending a couple of days attacking the problem this way, I concluded that this would be a substantial block of work, at least several person-months to get it working correctly and efficiently. That’s not a lot in the grand scheme of things, but for this particular customer, this would use up their reasonable-but-not-lavish budget for months, even ignoring their other feature needs.

We set this problem aside for a few days, and upon more thought I realized that:

  • this would be a simple problem to describe in Prolog
  • the Prolog runtime would then solve the problem
  • the Prolog runtime would be responsible for doing it correctly and efficiently, i.e. our customer would not foot the bill to achieve those things.

We proceeded with the Prolog approach.

….

It actually took one day of work to get it working, integrated, and into testing, then a few hours a few weeks later to deploy it.

The implementation mechanism is pretty rough:

  • The rules (the fixed portions of the Prolog solution) are expressed in a Prolog source file, a page or two in length.
  • A batch process runs every N minutes, on a server with spare capacity for this purpose.
  • The batch process executes a set of SQL queries (in stored procs), returning a total of tens or hundreds of thousands of rows of data. SQL is used to format that query output as Prolog terms. These stored procs are executed using SQL Server BCP, making it trivial to save the results in files.
  • The batch process runs a Prolog interpreter, passing the data and rules (both are code, both are data) as input. This takes up to a few minutes.
  • The Prolog rules are set up, with considerable hackery, to emit the output data we needed in the form of CSV data. This output is directed to a file.
  • SQL Server BCP imports this output data back in to the production SQL Server database.
  • The result of the computation is thus available in SQL tables for the application to use.

This batch process is not an optimal design, but it has the advantage of being quick to implement, and robust in operation. The cycle time is very small compared to the business processes being controlled, so practically speaking it is 95% as good as a continuous calculation mechanism, at much less cost.

There are some great lessons here:

  • Declarative >>> Imperative. This is among the most important and broad guidelines to follow in system design.
  • Thinking Matters. We cut the cost/time of this implementation by 90% or more, not by coding more quickly, but by thinking more clearly. I am a fan of TDD and incremental design, but you’re quite unlikely to ever make it from a handcoded solution to this simply-add-Prolog solution that way.
  • The Right Tool for the Job. Learn a lot of them, don’t be the person who only has a hammer.
  • A big problem is a big opportunity. It is quite possible that another firm would not have been able to deliver the functionality our customer needed at a cost they could afford. This problem was an opportunity to win, both for us and for our customer.

That’s all for now; it’s time for LessConf.

The Much-Discussed Apple iPhone 4.0 beta TOS

As I write this in April 2010, Apple has just released a beta iPhone 4.0 SDK with the following rather alarming addition to the legal terms (at least according to many web sites; I haven’t seen it directly from Apple yet):

3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

There is an avalanche of commentary out there on this, most of it negative, centered around the word “originally”. John Gruber, in his second comment about section 3.3.1, gives some reasons why this is sensible from Apple’s point of view. The essence is that it will lead to higher quality apps, on average.

I will try to add a bit of insight from my experience. I am of two minds on this.

Why It Makes No Sense

From the point of view of using / creating the best tools, it strikes me as ridiculous. Great developers, or developers creating great things, generally use layered software designs to do so; and indeed layering, the increase in abstraction over time, is absolutely crucial to the software industry’s ability to make good use of ever-growing hardware capabilities. One obvious and vital kind of layering/abstraction is the use of higher level languages. Apple’s new insistence that only its provided languages/runtimes be used, ignores this fundamental insight.

More ominously, consider what happens if this notion of “originally written” turns out to be enforceable. Imagine if earlier platforms had required that all software be written “originally” in the provided languages, with any attempt to build a higher level on top, banned; we could never have gotten to the amazing variety of technology available today, under such a regime. If such restrictions grow popular beyond Apple, that could lead our industry in to a kind of dark age.

I don’t think that’s likely though, and for that matter I think it is most likely that Apple won’t be able to make this new term stick. There are many successful apps already using a variety of languages, frameworks, and layered designs. Enforcing it as written, universally, would impact too many of those. Yet if Apple enforces it selectively (just to squash Flash, as many commentators suggest), they invite a long and painful legal mess. Speaking of that, such a legal mess would be an enormous windfall for Android.

Why It Makes a Bit of Sense

Setting aside the questions of whether it can possibly work, Apple’s notion does make some sense, at least for the immediate future for this specific (iPhone etc.) platform. Most iPhone users/customers are probably better served by a market consisting mostly of native apps.

The difference between native and non-native apps for iPhone OS, is mostly analogous to the same difference a decade ago in PalmOS software. The native (and tedious to use) C toolset was the most popular, but there were many other (higher level) tools to choose from, and with frank honesty I can say that the results were generally poor, and almost always worse than writing the same app with the native tools.

In fact, my firm Oasis Digital worked for at least two different customers on projects to replace applications written using such tools, with code we wrote in C or C++. Yes, it took longer; yes, it was more lines of code. Yes, it took more testing and was more prone to the kinds of problems low level code has, with memory leaks and the like.

But most critically, from the point of view of the users, our results with low level tedious implementation were obviously better. The high level tools available today for iPhone are much better than the Palm stuff a decade ago; but still, I suspect the same applies to a lesser extent on the iPhone-OS platform today.

Apple, Cut That Cord

As I think back to opening up my new iPad, two things stand out. First, the product packaging is delightfully minimal. It does not even contain media with the iTunes software, instead explaining that the first step is to download iTunes.

Second, a new iPad does not work at all until syncing it with a PC (Mac or Windows) via the included USB cable. I find this surprising because, in so many other ways, the iPad is a great device for someone who wants access to basic computing capabilities (web, email, casual games) without caring about the complexity of a PC. Yet such a user must already have a computer, with all its (potentially very-un-Apple) complexity, to use an iPad. The polished experience is potentially debased by being plugged in to a $200 closeout PC, just to get it running.

The need for iTunes seems quite justifiable in smaller Apple iDevices; but for the iPad, I can’t help but notice its specs are ample to be self-contained.

Even for those of us who will sync to a PC, why the cord? The iPad has WiFi and Bluetooth, as do most PCs nowadays. Plugging in that wire to sync seems utterly antiquated. Apple could scrap that many-pinned proprietary port and replace it with a much simpler power plug, which would also free it from the pitiful rate of charge provided by USB. An iPad could have a MagSafe charger, and charge up more safely and in half the time.

Apple, cut that cord!