Full stack Angular – live coding – talk notes

I spoke (and live-coded) at the Advanced Angular Lunch in St. Louis, August 2019. The talk description:

Watch or heckle as Kyle from Oasis Digital live codes a full-stack Nx + Angular + Node + Nest + GraphQL project, with concurrent explanation and Q&A along the way. Mistakes will be made, and perhaps corrected. Lessons will be learned, but perhaps forgotten. You (might) see the productivity possible with “full stack Angular” – but this is real live coding so anything could happen.

In this abbreviated version of the talk, I explained more Nx than planned, but left off the final step (converting a REST API to GraphQL).

We recorded and streamed the talk. (If the video starts with silence, skip to 2:30.)

In lieu of slides, I used these notes during the talk (which was 80% live-coding).

Introduction

Let’s aim for an app that shows some people and some items of work they are supposed to do. Fortunately we already have some (very fake “Lorem”) data of this nature. Of course the app doesn’t matter, and is not intended to be interesting. But it will give us an opportunity to fetch data via GraphQL.

Already Installed

  • Node – nvm install 10
  • VSCode – https://code.visualstudio.com/
  • A back-end REST server: https://api.angularbootcamp.com/
  • Angular CLI – no install needed
  • Nest CLI – no install needed
  • Nx CLI – no install needed

To get started, I created an Nx project:

Project type: choose “angular-nest”
Name: AppOne (use a name with no spaces)
CSS: CSS

It can take a while to download everything, especially the first time.
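The creation step, with the Nx version current at the time of the talk (mid-2019), looked roughly like the following; the exact flag and preset names are assumptions and may differ in later Nx versions:

```shell
# Create a new Nx workspace; the prompts ask for preset, name, and stylesheet format.
# Passing them as flags skips the interactive prompts.
npx create-nx-workspace app-one --preset=angular-nest --style=css
```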

Work in the project

Launch an IDE:

Run the Angular app for development in one terminal window:

Run the Nest (Node) server in another terminal window:

Remember to restart after adding libraries. Other than that, ideally (modulo imperfections in the tooling) you should be able to develop extensively without restarting the tools.

Quick Tour

This freshly created project has reasonable baseline modern practices done for you:

  • Project-level monorepo
  • Automatic formatting and linting
  • Intra-app dependency management
  • Unit and E2E test tools ready to use
  • Shared client/server code

It’s 2019. This stuff is not the future, it is the present.

Avoid global scripts?

This is not necessarily a best practice, but it is possibly a good practice, and it should be on your radar: install the Nest CLI locally in the project.

Then add nest to the list of package scripts. These serve as hooks to allow use of typically-global tools without a global install.
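A sketch of what that looks like (package and script names are the conventional ones; the exact versions are whatever was current):

```shell
# Install the Nest CLI as a dev dependency, instead of globally
npm install --save-dev @nestjs/cli
```

Then a scripts entry in package.json such as "nest": "nest" lets you invoke the project-local CLI with npm run nest -- <args>, with no global install required.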

Work around a Nx-Nest problem

Nx’s Nest support is a work in progress. For the moment, something like this is necessary to make the schematics work.

Create a file nest-cli.json with the contents:
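Something like the following; the collection and sourceRoot values here are assumptions based on the default Nx layout, with the Nest app at apps/api:

```json
{
  "language": "ts",
  "collection": "@nestjs/schematics",
  "sourceRoot": "apps/api/src"
}
```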


Live Coding – as time allows

Show list of employees:
Nest API server fetch from “legacy backend”.

Borrow syntax from a previous example.

Click to show detail: Detail page, show emp and task detail…

Fetch with GraphQL instead of REST

Do a mutation

Track selected emps, put state in Apollo addon
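For a flavor of the “fetch with GraphQL instead of REST” step: under the hood, a GraphQL fetch is just an HTTP POST whose JSON body carries a query document, which is what Apollo constructs for you. A minimal sketch, where the field names (employees, id, name) are invented for illustration and are not the talk’s actual schema:

```typescript
// Illustrative only: these field names are assumptions, not the real API.
const EMPLOYEES_QUERY = `
  query Employees {
    employees {
      id
      name
    }
  }
`;

// A GraphQL HTTP request body carries the query text plus any variables;
// client libraries like Apollo build and send this for you.
function buildGraphQLBody(
  query: string,
  variables: Record<string, unknown> = {}
): string {
  return JSON.stringify({ query, variables });
}

console.log(buildGraphQLBody(EMPLOYEES_QUERY));
```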

Adopt practices and layout?

Many teams spend countless hours and dollars discussing whether to do these things, how to do them, and when: making plans about which release to start with, carefully scheduling incremental rollout.

Best advice for many Angular++ projects:

  • Adopt many current practices en masse.
  • Start a new project scaffold; generate apps and libs.
  • Cut/paste old code in. (“cut” until the old thing is empty, don’t “copy”)
  • Auto-format and lint-fix.
  • Furious editing, “hired guns” if needed.
  • Do all that quickly and move on.

Resources and Links

https://nestjs.com/

https://cloud.google.com/run/docs/quickstarts/build-and-deploy

https://nx.dev/

https://www.apollographql.com/docs/angular/


Mobile Lua – iOS and Android apps with Corona

On Thursday (May 26, 2011), I presented at the St. Louis Mobile Dev group, on cross-mobile-platform development with Lua. There are various ways to do this (including rolling your own), but for simplicity I used Ansca’s Corona product. The talk was somewhat impromptu, so I didn’t record audio or video. The slides are available:

… or as a PDF: 2011-Lua-Corona-Mobile-Dev.pdf

From this blog, you might get the impression that I use Lua extensively. That is not true; 95% of my work does not involve Lua in any way.

Lua Doesn’t Suck – Strange Loop 2010 video

At Strange Loop 2010, I gave a 20 minute talk on Lua. The talk briefly covered six reasons (why, not how) to choose Lua for embedded scripting. Lua is safe, fast, simple, easily learned, and more popular than you might expect.

The Strange Loop crew only recorded video in the two largest venues (out of six), so I made a “bootleg” video of my talk, for your viewing pleasure:

video

The video/audio sync starts out OK, but drifts off by a second or so by the end. The drift is minor, so it is reasonably viewable all the way through. If you don’t have Flash installed (and thus don’t see the video above), you can download the video (x264); it plays well on most platforms (including an iPad).

The slides are available for PDF download.


Video Hackery

This video recording was an experiment: instead of hiring a video crew (with professional equipment), or using my DV camcorder, I instead used the video recording capability of my family’s consumer-grade Canon digicam. This device has three advantages over my DV camcorder:

  1. No tape machinery; no motors; thus no motor noise in the audio.
  2. Smaller size, easier to carry in and out.
  3. Directly produces a video file, easily copied off its SD card.

As you can see from the results, the video quality is adequate but not great. Still, I learned that if I want to increase the quality of recording, the first step is not to use a better camera or lens! Rather, it is to bring (or persuade the venue to provide) better light. For good video results, the key is to light the speaker well, without shining any extra light on the projector screen. With that in place, a better camera makes sense.

The audio was a different story. Like nearly all consumer video cameras (and digicams with video), mine doesn’t have an external audio input, so the audio (from ~12 feet away) was awful. As a backup I had used a $75 audio recorder and a $30 lapel microphone, and that audio is very good, certainly worth using instead of the video recording audio track.

To combine the video in file A with the audio in file B, I used the ffmpeg invocation below. I reached the time adjustments below in just a few iterations of trial and error, by watching the drafts in VLC, using “f” and “g” to experiment with the audio/video time sync. I also trimmed off a bit of the bottom of the video, and used “mp4creator.exe -optimize”, which I had handy on a Windows machine, to prepare the file for progressive download viewing.

ffmpeg -y -ss 34.0 -i WS_10001.WMA -ss 34.0 -itsoffset -12.05 -i MVI_4285.AVI -shortest -t 8000 -vcodec libx264 -vpre normal -cropbottom 120 -b 400k -threads 2 -async 200 Cordes-2010-StrangeLoop-Lua.m4v

The remaining bits of technology are FlowPlayer, a WordPress FlowPlayer plugin, and a CDN.

SaaS: The Business Model – Video

On Feb. 27 at St. Louis Innovation Camp 2010, I gave a talk on the SaaS business model. I posted the slides, handout, audio, and transcript soon thereafter. Here, finally, is a video of the 44-minute-long talk. Why did it take over three months to get online? Read on below.

video

Warning: Sausage-making Discussion Below

The following has nothing to do with the content of the video.

This is an H.264 video, shown here initially with a Flash-only player (FV WordPress Flowplayer). Later I’ll replace this Flash-only widget with one that offers HTML5 video (for iPad use, in particular), when I find one that works sufficiently well.

That’s the easy part, though. Getting this video to you here was an adventure, and not in a good way. Three recordings were made of the talk:

  1. We hired a professional videographer to record the talk. When I say professional, I mean it only in the most literal way, i.e. the videographer charged money. They showed up with a nice camera and a wireless lapel mic… but somehow produced a broken video recording (the first 10-15 minutes were intermittent video noise). In addition, the mic gain was turned up way too high and thus the audio is awful.
  2. Dave Blankenship recorded the talk on his consumer camcorder; he was not paid for this, and he did a much better job. This video is usable all the way through, but arrived in an oddball format produced mostly by some models of JVC camcorders. The audio was not so hot, because he used the mic built in to the camcorder from the back of the room.
  3. I recorded the audio using a $5 microphone plugged in to an iPod Nano, sitting on a table at the front of the room. It’s a bit noisy, but with a few minutes of work with Audacity (Noise Removal and Normalization), the results are much better than either video attempt.

Armed with this, I set about to somehow combine the video from #2 with the audio from #3. I sent emails describing this mess to several videographers I found on Craigslist. Most of them didn’t reply at all. I finally got a cost estimate from one, of many hundreds of dollars or more, and not much assurance of results.

Now I’m willing to spend some money to get good results, but spending it without confidence of results is less appealing; so I set about trying myself instead.

First, I cleaned the audio in Audacity as mentioned above.

Second, I watched the video and listened to the audio a few times, to get the approximate starting timestamp in each one of the moment the talk actually started; each recording had a different amount of lead-in time.

Third, I grabbed ffmpeg, the swiss army knife of command line video and audio processing. After reading a dozen web pages of ffmpeg advice, and a number of experiments (with short -t settings, to quickly see how well it works without waiting to transcode the whole thing), I ended up with this command to produce the encoded video:

ffmpeg -y -ss 40.0 -i Recording-3-audio-only-clean.wav -ss 95 -i Recording-2-video-ok-audio-bad.mod -shortest -t 18000 -vcodec libx264 -vpre normal -b 700k -threads 2 Cordes-2010-SaaS.m4v

I then noticed that the MacPorts installation of ffmpeg omits the important qt-faststart tool, and found this helpful version of qt-faststart and used it instead, on my Mac; later I switched to a Linux machine with an ffmpeg install including qt-faststart. Without the faststart step, the metadata in the m4v file is arranged in a way that prevents progressive/streaming play-while-downloading.

The results are good but not great:

  • The video has some motion/interlace artifacts; these were present in the original recording, and I’m not aware offhand of what to do about them.
  • The video camera used rectangular pixels; the pixel aspect ratio is 3:2 while it is intended for display at 16:9. I wasn’t able (at least in 20 minutes of learning and experimentation) to get the 16:9 output working correctly, so if you grab the underlying m4v file you can see the aspect ratio a bit off in the shape of the clock on the wall, for example.
  • The audio-video sync is adequate (and plenty good enough to follow along) but not perfect. Clearly using the audio track on a video recording is much better than putting them together in post-processing.
  • The audio is not as good as if I had used a lav or headset mic, though I think it’s quite remarkably good for a $5 mic plugged into an iPod.
  • I’ve no idea if ffmpeg complies with any of the relevant copyrights/patents/whatever in video production, though it seems hopefully safe to use for a one-off non-commercial video like this. (Normally I use Apple’s iMovie for my videos, and I assume Apple has taken care of such things.)

A few morals of this story:

  • Get some powerful tools, and learn how to use them.
  • Be willing to pay for professional work, but be skeptical. Just because you pay, doesn’t mean it will be quality work.
  • Have a plan B. If I had assumed that at least one of the two videos would get decent audio, and skipped my own audio recording, I’d not have been able to deliver the acceptable audio here. If Dave had assumed that my professional videographer would produce results, and turned off his camera, we’d have no video here at all.

SaaS: The Business Model – Slides, Audio, Transcript

On Feb. 27 at St. Louis Innovation Camp 2010, I gave a talk on the SaaS business model. If you missed it, you might be interested in: