Windows 10 Bootable USB from ISO on Linux

Getting updated Windows media is certainly easier than it used to be. Back in the day, you'd have to slipstream updates into your installation media. It was a pain, and prone to cause problems (mainly because you could slipstream other things in there, too).

Microsoft has seen the light and provides ISO downloads of the current version of Windows (Note: This is not a "free" Windows license -- you still need to pay for that, or install on a machine which has been previously licensed).

Actually using that ISO is not quite straightforward. You could burn it to a dual-layer DVD, but I don't currently own a computer with an optical drive. USB is the way to go.

With years of experience creating bootable Linux USBs, I did the standard thing:

$ dd if=Win10_1809Oct_English_x64.iso of=/dev/sdX

However, while Linux ISOs are typically hybrid images (files that can boot from both CD and USB, on both BIOS and UEFI machines), it appears Microsoft has not done the same. This USB fails to boot.
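One way to see the difference is to check whether the ISO carries a partition table, which is what makes a hybrid image dd-able. A sketch (the filename is the download from above):

```shell
# A hybrid ISO (like most Linux distros) embeds an MBR/GPT partition
# table, so fdisk can read it; a CD-only ISO shows nothing useful.
$ fdisk -l Win10_1809Oct_English_x64.iso

# 'file' also distinguishes the two: hybrid images typically report
# "DOS/MBR boot sector" alongside the ISO 9660 data.
$ file Win10_1809Oct_English_x64.iso
```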

The only official way to create USB-bootable Windows media is to use the "Media Creation Tool" on Windows. You may encounter the same issue I did: I don't have Windows installed yet!

There is an alternate way, using a project called woeusb.


Running the GUI simply doesn't work, as more command-line options appear to be required.

One of the files in the image is larger than 4GB, which requires using NTFS on the USB disk. The following command is what I used to flash the Windows 10 1809 ISO to my USB stick (be sure to get the correct device name for your USB -- this command will wipe it).

$ sudo woeusb --target-filesystem NTFS --verbose --device Win10_1809Oct_English_x64.iso /dev/sdX
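If you're curious which file forces NTFS, you can loop-mount the ISO and look for anything over FAT32's 4GB file-size limit. A sketch, assuming /mnt as an available mount point:

```shell
# Mount the ISO read-only and list any file exceeding FAT32's 4 GB limit
# (typically sources/install.wim on recent Windows ISOs).
$ sudo mount -o loop,ro Win10_1809Oct_English_x64.iso /mnt
$ find /mnt -type f -size +4G
$ sudo umount /mnt
```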

Be aware that this USB media will not boot with secure-boot enabled, though the resulting Windows 10 installation will work with secure-boot.

DDPai M6 Plus Dashcam Review

I've spent the better part of one and a half years with the DDPai M6 Plus. I bought it based on the positive Techmoan review.

tl;dr: I'm somewhat disappointed in the device. Something simpler like an A119S is a much better buy.

Some background on me and dashcams

I had a Mini 0805 in my last car, and have two Mini 0805's (front+rear) in my second car (I just had the two cameras, and moved the front one into whichever vehicle I was driving). I was actually quite happy with the cameras, despite some complaints online (overheat, soft/blurry video, etc).

However, in the spring I noticed the audio was intermittently poor on one of the cameras. Sometimes it would work, sometimes it wouldn't. I decided to live with it. However, I recently purchased a new car, and decided to install a new, working camera. I also wanted some new features the 0805 didn't offer:

  • Parking mode
  • Wifi to retrieve an "occurrence" from my phone

While doing my research, Techmoan reviewed the DDPai M6 plus. He seemed quite impressed, and it seemed to check off all my boxes. I bought one, and hard-wired it in.

On with the review

I purchased the camera in July 2016, and wrote this review over the following year and a half (I kept failing to post it, and it has been revised accordingly).



  • Without additional hardware, parking mode will kill your battery.
  • LiIon battery instead of a capacitor.
  • Big. This camera just barely fits behind my mirror. It is much more noticeable than my 0805 ever was.
  • Wifi is always on, so my phone tries to connect to it periodically. Be sure to change the passphrase.
  • The app isn't really that good. It took me a while to find how to download regular non-saved video (even after looking it up), and I found I can only do it in small clips, one at a time. Effectively, you need to yank the SD card to retrieve large amounts of video. This isn't the end of the world (it's faster), but it means the wifi feature effectively goes unused.
  • The app has cloud features that don't work while connected to the camera, since the camera's wifi provides no internet connection.
  • It speaks Chinese when it turns on.
  • 25fps. Weird. Although I guess for the purpose, this doesn't really matter.

My biggest problem has been with the parking mode. It will frequently simply record realtime video for hours on end, instead of entering the timelapse mode. This causes unnecessary writes to the SD card, and overwrites previous clips much earlier than expected. Furthermore, it will drain my car battery in two or three days if the car sits unused. In colder months, this has led to significant use of my booster pack on Monday mornings. There is additional hardware that can solve this, but factor that into the purchase price.

I also found the wifi itself very hard to use, since my phone didn't want to stay connected to it due to the lack of an internet connection. I can't fault DDPAI for this, as Android is just trying to make sure I get my email. A "Use only for local network" checkbox in Android's wifi settings would be helpful.

Additionally, the wifi is very slow, short range, and prone to disconnects. I've taken to simply yanking the card to retrieve video, but since it has no screen, the app is still required for various functionality (periodic SD formatting, configuration, etc). A camera with a built-in screen is much easier to deal with.


  • It gets very hot during the day
  • Particularly after recording for a while, the sound can be noticeably delayed from the video. See this sample of cars driving past me. Also, at 1:00 I drive over some tracks, and the sound is delayed by about a second. (This was solved with a firmware update; however, there are more recent firmware updates that I can't apply, because the app simply fails to push the firmware over.)


It's not necessarily a bad device, it's just not a great one either. $173 CDN is quite a steep price for a camera with a LiIon battery and functionality issues. Ultimately, a capacitor version with battery-drain protection would go a long way toward convincing me the higher price tag is worth it, even if the app stayed as-is.

Ultimately, it's better than my 0805, but not enough to justify the $80 extra. The A118-C was $75 cheaper than the M6 Plus (and the A119S is available at that price now). You'd lose parking mode (although that might not be wisely usable on the M6 Plus) and wifi (although it's not really a great feature), but you gain a capacitor, a much slimmer design, and a truly stand-alone device.

tl;dr the tl;dr: Probably buy something else

Push It To the Limit #3

If you're considering trying out autocross, I say go for it. I'm very new, and have found people at the two events I've attended (WOSCA #1, and PITL #3) to be friendly and extremely helpful.

Additionally, you can do it with your own car. You don't need some sort of special race-spec track beast. Both events have also had loaner helmets available (although I spent $200 and bought my own helmet meeting the appropriate standards).

I've posted a video of the PITL #3 event on youtube. Additionally, I've created a playlist, which also contains my WOSCA videos.


When I left off after my third adventure with ffmpeg, I decided that I wanted to attempt video transitions next time. Well, it's next time.

Individual Clip Preparation

Like always, I concatenated my dashcam video chunks, then trimmed them as appropriate. This was done like my previous posts, using -codec copy to preserve quality.


I decided to attempt a timelapse intro/outro of my drive to the event. 32× speed seemed to be a good balance of not-too-long, while also being somewhat fluid.

ffmpeg -i input.mp4 -filter:v "setpts=0.03125*PTS" -an -strict -2 output.mp4

That command takes the trimmed commute clip as input. The filter speeds up playback: 1*PTS would be normal speed, and 1/32 = 0.03125.

I did not timelapse the audio, although that is possible too.
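If you did want the audio sped up to match, ffmpeg's atempo filter can do it. Older ffmpeg builds limit atempo to a 2.0× factor per instance, so a 32× speed-up means chaining five of them (2^5 = 32). A sketch, with placeholder filenames:

```shell
# Chain five atempo=2.0 filters for a 32x audio speed-up (2^5 = 32),
# dropping the video stream with -vn.
$ ffmpeg -i input.mp4 -vn \
    -filter:a "atempo=2.0,atempo=2.0,atempo=2.0,atempo=2.0,atempo=2.0" \
    audio_32x.m4a
```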

Other prep

I also made title images in Gimp, like before.

Performance considerations

I went through several iterations to reach the final video I wanted. The best resource I had was this post by Mulvya on Stack Overflow.

Initially, I had five video streams, configured to fade in/out as required. However, I very quickly hit a performance issue related to ffmpeg's overlay filter. Previously I used concat, but performing a transition requires the videos to actually overlay.

To illustrate the issue, assume you have two one-minute videos, and you want a two-second transition between them. Your final video will be 1:58. ffmpeg will stretch the first video's last frame to the length of the final video, which means it is now compositing 58 additional seconds that could technically be preserved unchanged. Attempting to work around this (by fading out the source video, trimming it, etc.) seems to basically mean you're compositing a transparency instead. The performance is still affected.

Then remember I'm compositing five videos together, each one building on the previous. This very quickly went from real-time to 5fps processing time.

The "solution" is to chunk each clip into three parts: a short beginning clip and a short ending clip, each a few seconds long, and a middle clip that requires no modifications. Now you can overlay the two 2-second clips to create your transition, and concat the other 1:56.

The performance benefit is substantial.
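Before the full command, here's a minimal two-clip sketch of the same idea, with placeholder filenames and hand-picked times (assuming two one-minute clips and a two-second fade):

```shell
# Only the 2-second overlay is composited; a_main and b_main pass
# through concat untouched, which is where the speed-up comes from.
$ ffmpeg -i a.mp4 -i b.mp4 -filter_complex "
    [0:v] trim=start=0:end=58,setpts=PTS-STARTPTS [a_main];
    [0:v] trim=start=58:end=60,fade=out:st=58:d=2:alpha=1,setpts=PTS-STARTPTS [a_end];
    [1:v] trim=start=0:end=2,setpts=PTS-STARTPTS [b_start];
    [1:v] trim=start=2,setpts=PTS-STARTPTS [b_main];
    [b_start][a_end] overlay [transition];
    [a_main][transition][b_main] concat=n=3 [v]
  " -map "[v]" -an out.mp4
```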

The Video Build Command

The command I ended up with is basically a big mess, so I'll just start with it, then pick apart each section:

 4 time ffmpeg \
 5 -i clip0.mp4 \
 6 -i clip1.mp4 \
 7 -i clip2.mp4 \
 8 -i clip3.mp4 \
 9 -i clip4.mp4 \
10 -i clip5.mp4 \
11 -loop 1 -i ../../../PITL_logo.png \
12 -loop 1 -i ../../../0-PITL.png \
13 -loop 1 -i ../../../1-Run1.png \
14 -loop 1 -i ../../../2-Run2.png \
15 -loop 1 -i ../../../3-Run3.png \
16 -loop 1 -i ../../../4-Run4.png \
17 -loop 1 -i ../../../5-Run5.png \
18 -filter_complex "
19   [0:v]  trim=start=0:end=15,setpts=PTS-STARTPTS                                [clip0start];
20   [0:v]  trim=start=15:end=170,setpts=PTS-STARTPTS                              [clip0];
21   [0:v]  trim=start=170:end=172,fade=out:st=170:d=1:alpha=1,setpts=PTS-STARTPTS [clip0end];
23   [1:v]  trim=start=0:end=10,setpts=PTS-STARTPTS                                [clip1start];
24   [1:v]  trim=start=10:end=88,setpts=PTS-STARTPTS                               [clip1];
25   [1:v]  trim=start=88:end=90,fade=out:st=88:d=1:alpha=1,setpts=PTS-STARTPTS    [clip1end];
27   [2:v]  trim=start=0:end=10,setpts=PTS-STARTPTS                                [clip2start];
28   [2:v]  trim=start=10:end=106,setpts=PTS-STARTPTS                              [clip2];
29   [2:v]  trim=start=106:end=108,fade=out:st=106:d=1:alpha=1,setpts=PTS-STARTPTS [clip2end];
31   [3:v]  trim=start=0:end=10,setpts=PTS-STARTPTS                                [clip3start];
32   [3:v]  trim=start=10:end=106,setpts=PTS-STARTPTS                              [clip3];
33   [3:v]  trim=start=106:end=108,fade=out:st=106:d=1:alpha=1,setpts=PTS-STARTPTS [clip3end];
35   [4:v]  trim=start=0:end=10,setpts=PTS-STARTPTS                                [clip4start];
36   [4:v]  trim=start=10:end=98,setpts=PTS-STARTPTS                               [clip4];
37   [4:v]  trim=start=98:end=100,fade=out:st=98:d=1:alpha=1,setpts=PTS-STARTPTS   [clip4end];
39   [5:v]  trim=start=0:end=10,setpts=PTS-STARTPTS                                [clip5start];
40   [5:v]  trim=start=10,setpts=PTS-STARTPTS                                      [clip5];
42   [6:v]  trim=start=0:end=15,fade=out:st=3:d=1:alpha=1,setpts=PTS-STARTPTS                           [logo];
43   [7:v]  trim=start=0:end=15,fade=in:st=5:d=1:alpha=1,fade=out:st=12:d=1:alpha=1,setpts=PTS-STARTPTS [title0];
44   [8:v]  trim=start=0:end=9,fade=in:st=3:d=1:alpha=1,fade=out:st=7:d=1:alpha=1,setpts=PTS-STARTPTS   [title1];
45   [9:v]  trim=start=0:end=9,fade=in:st=3:d=1:alpha=1,fade=out:st=7:d=1:alpha=1,setpts=PTS-STARTPTS   [title2];
46   [10:v] trim=start=0:end=9,fade=in:st=3:d=1:alpha=1,fade=out:st=7:d=1:alpha=1,setpts=PTS-STARTPTS   [title3];
47   [11:v] trim=start=0:end=9,fade=in:st=3:d=1:alpha=1,fade=out:st=7:d=1:alpha=1,setpts=PTS-STARTPTS   [title4];
48   [12:v] trim=start=0:end=9,fade=in:st=3:d=1:alpha=1,fade=out:st=7:d=1:alpha=1,setpts=PTS-STARTPTS   [title5];
50   [clip0start][logo]     overlay [clip0logo];
51   [clip0logo] [title0]   overlay [clip0transition];
53   [clip1start][title1]   overlay [clip1title];
54   [clip1title][clip0end] overlay [clip1transition];
56   [clip2start][title2]   overlay [clip2title];
57   [clip2title][clip1end] overlay [clip2transition];
59   [clip3start][title3]   overlay [clip3title];
60   [clip3title][clip2end] overlay [clip3transition];
62   [clip4start][title4]   overlay [clip4title];
63   [clip4title][clip3end] overlay [clip4transition];
65   [clip5start][title5]   overlay [clip5title];
66   [clip5title][clip4end] overlay [clip5transition];
68  [clip0transition] [clip0]
69  [clip1transition] [clip1]
70  [clip2transition] [clip2]
71  [clip3transition] [clip3]
72  [clip4transition] [clip4]
73  [clip5transition] [clip5] concat=n=12 [vout]
74 " \
75 -map "[vout]" \
76 -aspect '16:9' \
77 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p \
78 -movflags faststart \
79 -strict -2 \
80 00_final_video.mp4


  • Lines 5-17 are my source input files. All visual files are here (video, overlays, etc).

  • Lines 19-21 are the chunking of the first video into start/main/end segments. This is then repeated for each clip through to Line 40.

  • setpts to guarantee every video segment starts its counters at 0 (instead of their pre-existing timestamps).
  • trim is used to cut the video. Values calculated by hand.
  • The end of each video has a fade-out. You could do a fade-in on the start segment, but that has additional quirks I'll discuss shortly.
  • Lines 42-48 are the title cards (png files from Gimp)
  • Each configured with a fade-in, and fade-out, and trim values.
  • Line 53/54 are the short overlay transitions.
  • Line 53 overlays the title card on the starting video
  • Line 54 overlays the fade-out transition of the ending video on-top of the output from Line 53

It is important to note here that you could use a fade-in on the start video instead. However, you'll need your fade-out to be the exact correct length, otherwise your fade-in will be truncated, and/or your fade-in won't transition seamlessly to the rest of the clip. This way was easier.

  • Lines 50/51 are very similar, but for the Logo and main title.

  • Lines 68-73 simply concatenate the short transition overlays we've created, with the untouched video clips in between.

This saves soooo much processing time.

  • Line 75 maps the final concat output to the file

  • Line 76 fixes the aspect ratio

  • Line 77-79 sets the youtube video codec stuff

  • Line 80 is the output file name.

Now, that's pretty complex.

The Audio Build Command

You also might notice there's no sound. Attempting to do sound at the same time caused a number of buffer issues, video stutter, and sound not lining up. Furthermore, while this video renders "faster" than a pure-overlay approach, we're still talking >10 minutes per attempt, so trial-and-error to fix sound was very time consuming. I eventually decided to just do the sound separately, then merge the two afterwards.

100 time ffmpeg \
101 -itsoffset 0 -i ../../../Alternate.mp3 \
102 -i clip1.mp4 \
103 -i clip2.mp4 \
104 -i clip3.mp4 \
105 -i clip4.mp4 \
106 -itsoffset 0 -i ../../../Drifting_2.mp3 \
107 -filter_complex "
108   [0:a]  atrim=start=0:end=171,asetpts=PTS-STARTPTS [clip0sound];
109   [1:a]  atrim=start=0:end=89,asetpts=PTS-STARTPTS  [clip1sound];
110   [2:a]  atrim=start=0:end=107,asetpts=PTS-STARTPTS [clip2sound];
111   [3:a]  atrim=start=0:end=107,asetpts=PTS-STARTPTS [clip3sound];
112   [4:a]  atrim=start=0:end=99,asetpts=PTS-STARTPTS  [clip4sound];
113   [5:a]  atrim=start=0:end=246,asetpts=PTS-STARTPTS [clip5sound];
115  [clip0sound] [clip1sound]  acrossfade=d=1  [a01];
116  [a01]        [clip2sound]  acrossfade=d=1  [a012];
117  [a012]       [clip3sound]  acrossfade=d=1  [a0123];
118  [a0123]      [clip4sound]  acrossfade=d=1  [a01234];
119  [a01234]     [clip5sound]  acrossfade=d=1  [aout]
121 " \
122 -map "[aout]" \
123 -vn \
124 -codec:a aac -strict -2 -b:a 384k -r:a 48000 \
125 -movflags faststart \
126 -strict -2 \
127 00_final_audio.mp4

This command is actually very similar. Interestingly, there is a crossfade functionality for audio. Why this doesn't exist for video, I'll never know.

  • Lines 101-106 are the input files. Note that I used mp3 files for the timelapses. These are from the Youtube Audio Library (attributed appropriately on the youtube video)

  • Lines 108-113 are similar time clipping, using values similar to the source videos (but all 1 second off, due to the two-second overlap I used for the videos)

  • Lines 115-119 are stringing the audio together using the acrossfade filter.

Merging Audio and Video

ffmpeg -i 00_final_video.mp4 -i 00_final_audio.mp4 -codec copy 00_final_merged.mp4

Not much to it. You may need to do a few rounds of audio-building and merging to ensure the sound lines up with the video.
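A quick way to sanity-check the alignment before merging is to compare container durations with ffprobe. A sketch, using the output filenames from above:

```shell
# Print the duration of each render; they should match to within a
# fraction of a second before merging.
$ ffprobe -v error -show_entries format=duration \
    -of default=noprint_wrappers=1:nokey=1 00_final_video.mp4
$ ffprobe -v error -show_entries format=duration \
    -of default=noprint_wrappers=1:nokey=1 00_final_audio.mp4
```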

Highlight Reel


I want to combine a few clips together, with five seconds of intro text on each one.

Create overlay text in GIMP

I created some overlay text in Gimp, then exported to png files. An example (note the transparency and drop shadow):

sample overlay png

Trim clips to length

Using the methods I've described in previous ffmpeg posts, I trimmed the clips, ensuring that there is at least five seconds of lead-in on each clip for the text.

ffmpeg -i 1-Tire-Squeal-Front.MOV -ss 1:01 -to 1:24 -c copy 1-Tire-Squeal-Front.trim.MOV

Overlay text on video clips

Then I overlaid the PNG on top of the video for five seconds, thanks to Google leading me to mark4o on Super User:

ffmpeg -i 1-Tire-Squeal-Front.trim.MOV -loop 1 -i 1-Tire-Squeal-Front.png -filter_complex "[1:v] fade=out:st=5:d=1:alpha=1 [ov]; [0:v][ov] overlay=0:0 [v]" -map "[v]" -map 0:a -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart -to 0:23 1-Tire-Squeal-Front.title.mp4

I basically just used his example, except I changed the duration and the offset, since my title png is already 1920x1080. Also, all that youtube codec stuff.

I did have trouble with the last few seconds of video being clipped with -shortest. I ended up specifying the appropriate -to length.

Overlay trailing text on video clip

I wanted an ending text on the video, but was having trouble getting two clips to match exactly (and doing the video like the above). Then I realized I could just use a slightly different filter:

ffmpeg -i 3-Bob.trim.MOV -loop 1 -i 3-Bob.png -loop 1 -i 4-Oh.png -filter_complex "[1:v] fade=out:st=5:d=1:alpha=1 [ov]; [2:v] fade=in:st=16:d=1:alpha=1 [oe]; [0:v][ov] overlay=0:0 [v]; [v][oe] overlay=0:0 [vf]" -map "[vf]" -map 0:a -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart -to 21 4-Bob.title.mp4

This creates a third input stream, which is also a looped image. We create stream [oe] (overlay end), which does a fade-in at 16 seconds. We then take [v] and overlay [oe] on it, creating [vf]. We then map [vf] to our output.


The video is what I set out to make. It might have been nicer to do some audio fades, but I'll live.

Next time

Now that I'm done, I think it would have been a better idea to have the overlays fade in and out, as well as figure out better scene transitions. Next time...

ZigZag Volvo

While driving through New York on my way to the Watkins Glen 2017 opening weekend, I encountered a nut in a Volvo zig-zagging through highway traffic. I decided to use ffmpeg yet again (I'm starting to wonder if OpenShot would actually bring anything to the table at this point, besides crashing).


I want a video with the rear camera until the Volvo passes, then the front camera after. I want to use the front audio for the whole video.


Luckily, the front & rear videos were synced to within a fraction of a second. Close enough.


Rear video, starting at 2:40, ending at 4:24, when the car passes me.

ffmpeg -i ../Speedy_Gonzales-Rear.MOV -ss 2:40 -to 4:24 -c copy Rear.trim.MOV

Front video, starting at 2:40, but ending at 4:57, when I want the video to end

ffmpeg -i ../Speedy_Gonzales-Front.MOV -ss 2:40 -to 4:57 -c copy Front.trim.MOV


This was pretty easy. I used an overlay as usual, but didn't perform the scaling. I also instructed the overlay filter to continue with the original video when complete:

time ffmpeg -i Front.trim.1080.MOV -i Rear.trim.MOV -filter_complex "overlay=0:0:pass" -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart Combined-FrontAudio.mp4

You'll note I'm using the same ffmpeg voodoo I used last time to help youtube processing time.

I was expecting to have to remove the rear audio, but interestingly enough, it was gone already. I suspect the overlay filter killed it? No idea, actually.
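My best guess is that ffmpeg's default stream selection picked the audio from the first input (the front camera) alongside the filtergraph's video output. To avoid relying on that behaviour, the mapping can be made explicit. A sketch (untested against this exact command, and with the codec flags trimmed):

```shell
# Label the filtergraph output [v], then map it plus the front-camera
# audio explicitly instead of trusting default stream selection.
$ ffmpeg -i Front.trim.1080.MOV -i Rear.trim.MOV \
    -filter_complex "[0:v][1:v] overlay=0:0:pass [v]" \
    -map "[v]" -map 0:a \
    -codec:v libx264 -codec:a aac -strict -2 out.mp4
```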


Except not so fast! I had accidentally recorded the two videos at different resolutions. The rear was doing 1920x1080, while the front was 2560x1080. This causes weird video like this:

mismatched resolution screenshot

Additionally, while 2560x1080 looks great to me (I have an ultrawide monitor), it letterboxes on a standard 1920x1080 display. Worse, youtube adds that letterboxing during their processing, so watching my own 2560 video on youtube gets black bars on all four sides. Lovely. So I need to crop the video as well:

time ffmpeg -i Front.trim.MOV -i Rear.trim.MOV -filter_complex "overlay=0:0:pass,crop=1920:1080:0:0" -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart -to 10 Combined-FrontAudio.mp4

I'm fine with keeping the left of the front frame, as it looks "good enough". If I wasn't, I'd have had to have adjusted the overlay offset, as well as the crop offset.

Just as a note, I fixed the resolution (and removed the datestamp from the rear camera) before I hit the track on Saturday. So I didn't have to do the cropping for the videos I put on youtube earlier (but filmed chronologically after).

CPU takes it easy

My CPU seems to be taking it easy today. Maybe it's upset I made it work on Easter? Or this task just didn't parallelize as well.

htop screenshot


The finished video turned out as well as I had hoped. I left in the first 90 seconds (despite mostly being a Pringles-related discussion) because, like any traffic video online, there often isn't enough context. Also, it's always the cammer's fault, so I guess more incriminating evidence...

ffmpeg part three - No more Boogaloos to give

Just like the first two times, I'm assembling my Watkins Glen 2017 track footage with ffmpeg.

However, I encountered a small issue I didn't last year, plus I decided to change things up a bit with codecs, and audio selection.

ffmpeg requires protocol whitelist now

I'm using pretty much the same concatenation command as last year (filenames are a bit different):

$ for f in Front-*MOV; do echo file "$f"; done | ffmpeg -f concat -i - -c copy Front.MOV

The error I got from ffmpeg looked like:

[file @ 0x55d7ded86680] Protocol not on whitelist 'crypto'!
[concat @ 0x55d7ded7cc80] Impossible to open 'Front-AMBA0009.MOV'
file:: Invalid argument

I used my google-fu and found a helpful blog that pointed me at the cause. This is actually a pretty decent security feature, and should prevent ffmpeg from reaching out to the world without your knowledge. However, it appears the default behaviour (at least from rpmfusion) differs from the man page:

protocol_whitelist list (input)
    Set a ","-separated list of allowed protocols. "ALL" matches all
    protocols. Protocols prefixed by "-" are disabled.  All protocols
    are allowed by default but protocols used by an another protocol
    (nested protocols) are restricted to a per protocol subset.

Perhaps file was disallowed because it was used via a pipe. Anyway, the solution was simple enough: whitelist 'file' and 'pipe' (we're piping in the file list).

$ for f in Front-*MOV; do echo file "$f"; done | ffmpeg -protocol_whitelist file,pipe -f concat -i - -c copy Front.MOV

Helpful options to make youtube processing faster

Uploading to youtube is always a lengthy process, as youtube needs to reprocess the video. They have documented the ideal format characteristics. And I found a blog post describing the ideal youtube parameters in ffmpeg terms, so I didn't have to read the man page again. Yay.

The concatenated videos (above) are just using the 'copy' codecs, so I'll leave them as-is. Only when combining the front & rear video will I apply the new settings:

$ time ffmpeg -i Front.trim.MOV -filter:v "movie=Rear.trim.MOV, scale=480:-1 [inner]; [in][inner] overlay=1370:740 [out]" -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart Combined-FrontAudio.mp4

Also, you might want to do this on a powerful computer. My laptop heat throttled and was converting at 8fps. Copied the video to my server, and got 45fps. (I also had a load average of 22!!!). Your mileage will vary depending on your CPU:

htop screenshot

Note that hardware encoding is not an option here due to the video overlay filter.

Audio from rear camera seems better

While comparing the front & rear audio clips, it appeared that the audio from the rear camera might be a bit better. It picked up our talking a little bit less, and picked up the (rear/mid) engine noise a little bit better. Also, the front camera had an intermittent click/hiss/whine in some videos. I haven't figured that out yet.

I wanted to do a side-by-side comparison of front & rear audio, so I needed both output files. Additionally, since the bulk of the conversion time was video, and that isn't changing, I can just swap out the audio on the video from above.

We'll use -map to grab the first input video (0:v), and the second input audio (1:a):

$ time ffmpeg -i Combined-FrontAudio.mp4 -i Rear.trim.MOV -map 0:v -map 1:a -codec:v copy -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart Combined-RearAudio.mp4

Note that I re-applied the youtube audio codec wizardry.

Watch the video

You can watch the three videos of the Watkins Glen 2017 trip on youtube.

Using Metadoctor on HP/Palm WebOS Devices

My Devices

WebOS Devices closed

WebOS Devices open

I decided to update three of my four Palm WebOS devices. The Pre (not pictured) and Pre2 (middle) were my primary, daily-driver phones for over two years, from September 2009 through to spring 2012, when I acquired a Galaxy Nexus and made the jump to Android.

The Pre3 (right) I also picked up on eBay. It came in box, with all accessories, and a spare battery.

The Pixi (left) I picked up cheap on eBay. "For Parts", because it "Does not move from activation screen". Now, the Pixi was a low-end device when it launched. It doesn't even have WiFi. This isn't a device you'd buy to use. But as a curiosity, I was interested. Due to the Palm servers being taken offline in January 2015, the activation process will never complete. Interestingly, this "Sprint" Pixi has a "Verizon" faceplate, possibly swapped from a Pixi Plus at some point. The model number confirms it is a Sprint device.

Pixi in Activation Loop

The Pre & Pixi are CDMA phones, locked to Bell Mobility and Sprint respectively. So they're effectively useless.

The Pre2 & Pre3 are HSPA+ phones, and I should be able to use them on my current provider, assuming I can locate a SIM adapter.


Following the MetaDoctor Wiki is fairly straightforward.


  • "WebOS Doctor" is Palm's official firmware update program + firmware images

  • "MetaDoctor" is a community-driven makefile that alters the above "WebOS Doctor" images with user-chosen modifications.

Fetching Firmware

After fetching metadoctor from github, you're required to fetch the WebOS firmware images from Palm. Palm, of course, no longer exists. Thankfully, you can get all WebOS Doctor images via

You'll have to rename the files after download to append the WebOS version to the filename. In my case:

# Bell Pre
mv webosdoctorp100ewwbellmo.jar webosdoctorp100ewwbellmo-1.4.5.jar 

# Sprint Pixi
mv webosdoctorp200ewwsprint.jar webosdoctorp200ewwsprint-1.4.5.jar 

# Unlocked Pre2
mv webosdoctorp224pre2wr.jar    webosdoctorp224pre2wr-2.2.4.jar 

# AT&T Pre3
mv webosdoctorp224mantaatt.jar  webosdoctorp224mantaatt-2.2.4.jar 

Build Modified Firmware

An important resource is the MetaDoctor README file. It outlines all the options available. Myself, I just want a few options. The instructions say to modify the Makefile, but passing args works just fine. These make commands will output a few assembled firmware images to the build directory.


Note that I'm using CARRIER=att for the Pre3, due to this being an AT&T phone. However, ADD_EXTRA_CARRIERS=1 should provide whatever additional data (APN?) for it to work on my network. The Pre2 uses CARRIER=wr. There is no "rogers" carrier, but it is on the wiki. However, it links to the webosdoctorp224pre2*wr*.jar file.
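The invocations looked something along these lines (a sketch: CARRIER and ADD_EXTRA_CARRIERS come from the text above, but the DEVICE values and target name are assumptions to check against the README):

```shell
# Pass options as make arguments instead of editing the Makefile.
# Pre3 on AT&T firmware, with extra carrier data added:
$ make DEVICE=pre3 CARRIER=att ADD_EXTRA_CARRIERS=1 all
# Unlocked ("wr") Pre2:
$ make DEVICE=pre2 CARRIER=wr all
```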

Install Firmware

Interestingly, WebOS Doctor images are more than just a firmware image, like you'd have with Android (or other devices). They're actually executable java bundles, which will push the encased firmware to the device. So you execute it on your computer. On the bright-side, they're Java, so they should work on any platform. On the super-bright-side, they actually work just fine with the OpenJDK installed in Fedora. No need to tarnish your system with Sun/Oracle Java.

$ java -jar build/pixi-p200eww-sprint-1.4.5/webosdoctorp200ewwsprint-1.4.5.jar

Now, the down-side (of course there was one). There seems to be some trouble actually finding the USB device. It seems that it uses platform-specific builds of novacom. Novacom is sort of like both fastboot and adb utilities in the Android world. Apparently there are libusb woes here. There are instructions on chasing down old libraries to resolve this issue...

Luckily, none of that matters on Fedora 24, since novacom is actually packaged.

$ sudo dnf install novacom
Last metadata expiration check: 0:52:44 ago on Wed Sep 14 22:24:59 2016.
Dependencies resolved.
 Package          Arch     Version                              Repository   Size
 novacom          x86_64   1.1.0-0.11.rc1.git.ff7641193a.fc24   fedora       10 k
 novacom-client   x86_64   1.1.0-0.11.rc1.git.ff7641193a.fc24   fedora       24 k
 novacom-server   x86_64   1.1.0-0.14.rc1.fc24                  fedora       52 k

Transaction Summary
Install  3 Packages

Total download size: 86 k
Installed size: 131 k

You'll need to run it as root to allow direct access to USB devices (you can probably configure some udev permission rules, but I won't be doing this often enough to bother). You can instruct it to fork, but I preferred opening a second terminal, so I could kill the process when I'm done.

$ sudo novacomd

Update: A systemd unit is actually installed. sudo systemctl start novacomd would be the correct method to start the service

You should see a few lines of output indicating a device was found.

Back in our metadoctor terminal, run the WebOS Doctor firmware we assembled earlier:

$ java -jar build/pixi-p200eww-sprint-1.4.5/webosdoctorp200ewwsprint-1.4.5.jar

A few clicks of "Next" should get your firmware flowing. Note that one of the 'next' buttons will be disabled if your device isn't found.

WebOS Doctor

Devices stuck in Activation

As mentioned above, the Pixi was stuck in perpetual activation. You'll need to force it into a mode that will accept firmware. This is sort of like bootloader mode on Android. This process should be identical for all Palm WebOS hardware.

  • Turn the device off
  • Hold Volume Up
  • Either press Power, or plug in USB (both turn on the phone)

The updater should see the device and enable you to load the firmware.

After a successful update, the initial boot screen will be a picture of Tux with a Palm Pre.

Pixi Tux

End of Pixi fun

After booting up the Pixi, I noticed it immediately started in "International Roaming" mode, notified me of pending voice mail, and had a phone number assigned to it.

I immediately placed it into airplane mode, then yanked the battery. Since this device lacks Wifi, there is very little reason to look at it again.

WebOS 2 on Pre

There is a process to install WebOS 2 onto WebOS 1 devices, like the Pre and Pixi. I actually used WebOS 2 on my Pre when it was my primary device back in 2011. It added some notable features, such as voice dialing (handy with bluetooth). It was perfectly functional, though a tad sluggish at times (it was targeting better hardware).

That said, backporting WebOS 2 to these devices is an interesting exercise. There is a WebOS 2 upgrade procedure, which involves running the appropriate script from within the metadoctor directory:

$ scripts/meta-bellmo-pre-2.1.0

The script will list the multiple WebOS Doctor images required at the top. You'll need to preemptively fetch these, as the URLs are bad (pointing to offline Palm servers, again). Note that they use a slightly different naming scheme (they do not get the version appended to the end).

The firmware install procedure is the same as above.

WebOS in 2016

Note: Yes, WebOS technically still exists. However, this article isn't about the TV OS version made by LG.

While this article was actually written to discuss getting Palm devices usable, I felt some preamble was necessary.

My Palm History

I've got a soft spot for Palm. My first PDA was the Palm Vx, possibly the greatest PDA ever made. It could easily get days of battery life while storing all your calendar and contact information, synchronizing periodically with the master copy on your computer.

Palm lost its way with later versions of PalmOS. Devices got more spec-competitive, but batteries didn't. My Tungsten T3, while technically superior in every spec, was actually less capable of performing its primary PIM tasks than the Vx before it.

We'll just skip over the dark years:

  • Software and Hardware divisions split
  • Software eventually dead-ends in development hell (taking the remains of BeOS with it)
  • Windows Mobile on Palm Hardware

Eventually Palm decided to get their act together.

Hello WebOS

Palm's WebOS showing blew me away. I wanted one so much that when they finally released in Canada, I walked into a Bell store in September 2009 and signed a 3-year contract, despite being laid off work only a week earlier.

The biggest complaint was the Palm Prē launch hardware. While very comparable spec-for-spec with the iPhone 3G and 3GS, WebOS featured true multitasking with multiple running applications, which Apple didn't. The end result was that the Prē was often laggy and slow. It was unfortunate.

Palm quickly released an updated Prē Plus in May 2010 with twice the memory. Eventually, the Prē2 (released October 2010) would reveal what a WebOS device should be like. I actually had one shipped up from the United States (since Bell had started converting from CDMA to HSPA+, just in time).

HP's wallet to the rescue

However, Palm was struggling financially. Before the Prē2 was released, HP bought Palm. This seemed like a good thing: Palm would finally have the financial backing to take on the Apple and Android platforms.

Looking forward, the Prē3 was on the horizon for 2011, featuring a major CPU upgrade, a significantly higher-res screen, and a refreshed design. Launching alongside the TouchPad tablet, there were some amazing demos; tapping content between devices was mind-blowing at the time.

Anticipation was high. Months passed.

Finally, the Prē3 launched in Europe on August 16, 2011, with the US "coming up in the near future". Two days later, on August 18, 2011, HP killed Palm.

There was hope that the OS would continue, and that other hardware vendors might pop up. Unfortunately, none of that happened (LG smart TVs aside).

Palm Hardware in 2016

Due to the very open nature of WebOS (every device could become a development device after a quick, official procedure), there was already a wide community modifying WebOS.

It was due to the efforts of these people that WebOS 2 was backported to the unsupported Prē (including features like voice dialing, etc).

The homebrew scene was particularly active with the "Preware" package manager, an alternative to the official App Catalog. Additionally, the nature of an OS built on HTML and JavaScript opened the doors to a wide variety of patches and modifications.

WebOS MetaDoctor allowed users to spin a new, customized WebOS image with pre-applied applications, patches, and features (including massaging hardware support). This image was then flashed to the phone for a customized user experience.

In late 2014, HP announced they were shutting down the authentication servers. Considering a Palm account was required for device activation, this could have been the final nail in the WebOS coffin if it weren't for MetaDoctor allowing you to skip the Palm account step.

WebOS in 2016

Luckily, HP decided to open-source WebOS as Open WebOS, along with the Enyo app framework. This allowed others to continue development of the platform.

LuneOS was born, and began modernizing the platform. While this unfortunately means it can't run on the old Palm hardware, it does run on (slightly) newer Nexus hardware (the Galaxy Nexus and Nexus 4).

Followups to come

Stay tuned for articles on both Meta-doctoring Palm hardware, as well as running LuneOS on a Nexus 4.

Snapperd on Fedora with SELinux enabled

Snapper is an excellent utility that provides hourly snapshots of btrfs subvolumes.

Fedora ships with SELinux enabled by default. This is excellent, and shouldn't be disabled. To allow software to function under SELinux, most software in Fedora has appropriate rules defined, including snapper.

However, snapper's rules only allow it to work on / and /home. If you wish to use it to snapshot /mnt/data, /srv, or any other path, you're going to have a very bad time.
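For reference, snapshotting an additional subvolume starts with registering a new snapper config for it (the config name "data" below is just an example):

```shell
# Register a snapper config for an extra btrfs subvolume
# (the config name "data" is an arbitrary example)
sudo snapper -c data create-config /mnt/data

# Confirm the config was created
sudo snapper list-configs
```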

While it is certainly possible to define new rules for paths you wish to back up, I decided that in this one particular case, snapper should be allowed free rein.

sudo semanage permissive -a snapperd_t

The above command tells SELinux to treat snapperd_t (the context snapperd runs within) as permissive. Rule violations will still be logged, but snapper will be allowed to continue.
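Since violations are still logged, you can later review what snapper would have been blocked from doing, and undo the permissive setting if you ever write proper rules (a sketch; ausearch comes from the audit package):

```shell
# Review recent AVC denials involving snapperd
# (these are still logged even in permissive mode)
sudo ausearch -m avc -ts recent | grep snapperd

# Revert: remove snapperd_t from the permissive list
sudo semanage permissive -d snapperd_t
```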

ffmpeg part two - Electric Boogaloo

I just attended the Watkins Glen opening day for the second year. It was, again, a blast.

I made some slight adjustments to my ffmpeg assembly procedure from last year.

Dashcam saves video in 5-minute chunks

Instead of creating .list files, I simply used a pipe as input:

for fo in AMBA091*; do echo "file '$fo'"; done \
    | ffmpeg -f concat -safe 0 -i - -c copy front.mov
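To see what the loop actually feeds ffmpeg, you can run it on its own; each line is a concat-demuxer "file" directive (the filenames below are made up for illustration):

```shell
# Print the concat script the loop generates (hypothetical filenames)
for fo in AMBA0910001.mov AMBA0910002.mov; do
    echo "file '$fo'"
done
# prints:
# file 'AMBA0910001.mov'
# file 'AMBA0910002.mov'
```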

Front and Rear videos need to be combined

Much like last year, I made short samples to confirm whether any offsets needed to be applied. However, I decided to move the rear video to the bottom-right corner to cover the timestamps, since they were incorrect on some videos (well, correct, just not for this time zone).

The math is basically the same as before for scaling, but instead of a left offset of 70, we want a right offset of 70 expressed in left-hand coordinates. Which works out to:

1920 - 480 - 70 = 1370
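The same arithmetic gives the vertical offset: assuming the rear video is 16:9, a 480-wide inset scales to 270 high, so with a 70-pixel margin from the bottom edge the overlay position comes out as:

```shell
# Overlay top-left corner for a 480x270 inset on a 1920x1080 frame,
# with a 70 px margin from the right and bottom edges
X=$((1920 - 480 - 70))   # horizontal: frame width - inset width - margin
Y=$((1080 - 270 - 70))   # vertical: frame height - inset height - margin
echo "overlay=${X}:${Y}"  # prints overlay=1370:740
```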

After the usual synchronization samples were made, it was time to perform the final assembly.

I used a slightly different file layout this time, keeping the front and rear videos separated. I used a loop to assemble them into a combined video:

$ time for FILE in Track{1,2,3}; do ffmpeg -i Front/DCIM/*${FILE}*mov -vf "movie=Rear/DCIM/Rear-${FILE}, scale=480:-1 [inner]; [in][inner] overlay=1370:740 [out]" -strict -2 ${FILE}.mov; done

You're going to want a good CPU. This is the execution time for just under 48 minutes of video on an Intel i5-2520M:

real    172m2.494s
user    619m51.494s
sys     1m37.383s

Final result

You can see the resulting videos on youtube: Part 1, Part 2, and Part 3. Part 3 has some bad sound. I'm not sure why.

This blog is powered by ikiwiki.