Using Metadoctor on HP/Palm WebOS Devices

My Devices

WebOS Devices closed

WebOS Devices open

I decided to update 3 of my 4 Palm WebOS devices. The Pre (not pictured) and Pre2 (middle) were my primary, daily-driver phones for over two years, from September 2009 through to spring 2012, when I acquired a Galaxy Nexus and made the jump to Android.

The Pre3 (right) I also picked up on eBay. It came in box, with all accessories, and a spare battery.

The Pixi (left) I picked up cheap on eBay, "For Parts", because it "Does not move from activation screen". Now, the Pixi was a low-end device when it launched. It doesn't even have WiFi. This isn't a device you'd buy to use, but as a curiosity, I was interested. Due to the Palm servers being taken offline in January 2015, the activation process will never complete. Interestingly, this "Sprint" Pixi has a "Verizon" faceplate, possibly swapped from a Pixi Plus at some point. The model number confirms it is a Sprint device.

Pixi in Activation Loop

The Pre & Pixi are CDMA phones, locked to Bell Mobility and Sprint respectively. So they're effectively useless.

The Pre2 & Pre3 are HSPA+ phones, and I should be able to use them on my current provider, assuming I can locate a SIM adapter.

MetaDoctor

Following the MetaDoctor Wiki is fairly straightforward.

Note:

  • "WebOS Doctor" is Palm's official firmware update program + firmware images

  • "MetaDoctor" is a community-driven makefile that alters the above "WebOS Doctor" images with user-chosen modifications.

Fetching Firmware

After fetching MetaDoctor from GitHub, you're required to fetch the WebOS firmware images from Palm. Palm, of course, no longer exists. Thankfully, you can get all WebOS Doctor images via archive.org.

You'll have to rename the files after download to append the WebOS version to the filename. In my case:

# Bell Pre
mv webosdoctorp100ewwbellmo.jar webosdoctorp100ewwbellmo-1.4.5.jar 

# Sprint Pixi
mv webosdoctorp200ewwsprint.jar webosdoctorp200ewwsprint-1.4.5.jar 

# Unlocked Pre2
mv webosdoctorp224pre2wr.jar    webosdoctorp224pre2wr-2.2.4.jar 

# AT&T Pre3
mv webosdoctorp224mantaatt.jar  webosdoctorp224mantaatt-2.2.4.jar 

Build Modified Firmware

An important resource is the MetaDoctor README file. It outlines all the options available. Myself, I just want a few options. The instructions say to modify the Makefile, but passing arguments works just fine. These make commands will output a few assembled firmware images to the build directory.

$ make DEVICE=pre  CARRIER=bellmo BYPASS_ACTIVATION=1 BYPASS_FIRST_USE_APP=1 ENABLE_DEVELOPER_MODE=1 DISABLE_UPLOAD_DAEMON=1 DISABLE_UPDATE_DAEMON=1 all
$ make DEVICE=pixi CARRIER=sprint BYPASS_ACTIVATION=1 BYPASS_FIRST_USE_APP=1 ENABLE_DEVELOPER_MODE=1 DISABLE_UPLOAD_DAEMON=1 DISABLE_UPDATE_DAEMON=1 all
$ make DEVICE=pre2 CARRIER=wr BYPASS_ACTIVATION=1 BYPASS_FIRST_USE_APP=1 ENABLE_DEVELOPER_MODE=1 DISABLE_UPLOAD_DAEMON=1 DISABLE_UPDATE_DAEMON=1 ADD_EXTRA_CARRIERS=1 all
$ make DEVICE=pre3 CARRIER=att BYPASS_ACTIVATION=1 BYPASS_FIRST_USE_APP=1 ENABLE_DEVELOPER_MODE=1 DISABLE_UPLOAD_DAEMON=1 DISABLE_UPDATE_DAEMON=1 ADD_EXTRA_CARRIERS=1 all

Note that I'm using CARRIER=att for the Pre3, since this is an AT&T phone; ADD_EXTRA_CARRIERS=1 should provide whatever additional data (APN?) it needs to work on my network. The Pre2 uses CARRIER=wr. There is no "rogers" carrier option; Rogers is listed on the wiki, but it links to the webosdoctorp224pre2wr.jar file anyway.

Install Firmware

Interestingly, WebOS Doctor images are more than just a firmware image, like you'd have with Android (or other devices). They're actually executable Java bundles, which push the enclosed firmware to the device, so you execute them on your computer. On the bright side, they're Java, so they should work on any platform. On the even brighter side, they actually work just fine with the OpenJDK installed in Fedora. No need to tarnish your system with Sun/Oracle Java.

$ java -jar build/pixi-p200eww-sprint-1.4.5/webosdoctorp200ewwsprint-1.4.5.jar

Now, the down-side (of course there was one): there seems to be some trouble actually finding the USB device. The updater uses platform-specific builds of novacom, which is sort of like both the fastboot and adb utilities in the Android world. Apparently there are libusb woes here, and there are instructions on chasing down old libraries to resolve the issue...

Luckily, none of that matters on Fedora 24, since novacom is actually packaged.

$ sudo dnf install novacom
Last metadata expiration check: 0:52:44 ago on Wed Sep 14 22:24:59 2016.
Dependencies resolved.
=================================================================================
 Package          Arch     Version                              Repository   Size
=================================================================================
Installing:
 novacom          x86_64   1.1.0-0.11.rc1.git.ff7641193a.fc24   fedora       10 k
 novacom-client   x86_64   1.1.0-0.11.rc1.git.ff7641193a.fc24   fedora       24 k
 novacom-server   x86_64   1.1.0-0.14.rc1.fc24                  fedora       52 k

Transaction Summary
=================================================================================
Install  3 Packages

Total download size: 86 k
Installed size: 131 k

You'll need to run it as root to allow direct access to USB devices (you can probably configure some udev permission rules, but I won't be doing this often enough to bother). You can instruct it to fork, but I preferred opening a second terminal, so I could kill the process when I'm done.

$ sudo novacomd

Update: A systemd unit is actually installed. sudo systemctl start novacomd would be the correct method to start the service.
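If you go the systemd route, you can also confirm the service is up (and see its device-found messages) before launching the doctor; a quick sketch, assuming the unit is named novacomd as above:

$ sudo systemctl start novacomd
$ journalctl -u novacomd -e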

You should see a few lines of output indicating a device was found.

Back in our metadoctor terminal, run the WebOS Doctor firmware we assembled earlier:

$ java -jar build/pixi-p200eww-sprint-1.4.5/webosdoctorp200ewwsprint-1.4.5.jar

A few clicks of "Next" should get your firmware flowing. Note that one of the "Next" buttons will be disabled if your device isn't found.

WebOS Doctor

Devices stuck in Activation

As mentioned above, the Pixi was stuck in perpetual activation. You'll need to force it into a mode that will accept firmware. This is sort of like bootloader mode on Android. This process should be identical for all Palm WebOS hardware.

  • Turn the device off
  • Hold Volume Up
  • Either press Power, or plug in USB (both turn on the phone)

The updater should see the device and enable you to load the firmware.

After a successful update, the initial boot screen will be a picture of Tux with a Palm Pre.

Pixi Tux

End of Pixi fun

After booting up the Pixi, I noticed it immediately started in "International Roaming" mode, notified me of pending voice mail, and had a phone number assigned to it.

I immediately placed it into airplane mode, then yanked the battery. Since this device lacks WiFi, there is very little reason to look at it again.

WebOS 2 on Pre

There is a process to install WebOS 2 onto WebOS 1 devices, like the Pre and Pixi. I actually used WebOS 2 on my Pre when it was my primary device back in 2011. It added some notable features, such as voice dialing (handy with bluetooth). It was perfectly functional, though a tad sluggish at times (it was targeting better hardware).

That said, backporting WebOS 2 to these devices is an interesting exercise. There is a WebOS 2 upgrade procedure, which involves running the appropriate script from within the metadoctor directory:

$ scripts/meta-bellmo-pre-2.1.0

The script lists the multiple WebOS Doctor images required at the top. You'll need to fetch these ahead of time, as the URLs are dead (pointing to the offline Palm servers, again). Note that they use a slightly different naming scheme (they do not get the version appended to the end).
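Since the URLs in the script are dead anyway, a quick way to see exactly which Doctor jars a given script references is to grep for them; a rough sketch, not part of the official procedure:

$ grep -io 'webosdoctor[a-z0-9.-]*\.jar' scripts/meta-bellmo-pre-2.1.0 | sort -u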

The firmware install procedure is the same as above.

WebOS in 2016

Note: Yes, WebOS technically still exists. However, this article isn't talking about the TV OS version made by LG.

While this article was actually written to discuss getting Palm devices usable, I felt some preamble was necessary.

My Palm History

I've got a soft spot for Palm. My first PDA was the Palm Vx, possibly the greatest PDA ever made. It easily got days of battery life from a device that stored all your calendar and contact information, synchronizing periodically with the master copy on your computer.

Palm lost its way with later versions of PalmOS. Devices got more spec-competitive, but batteries didn't. My Tungsten T3, while technically superior in every spec, was actually less capable of performing its primary PIM tasks than the Vx before it.

We'll just skip over the dark years:

  • Software and Hardware divisions split
  • Software eventually dead-ends in development hell (taking the remains of BeOS with it)
  • Windows Mobile on Palm Hardware

Eventually Palm decided to get their act together.

Hello WebOS

Palm's WebOS unveiling blew me away. I wanted one so much that when they finally released in Canada, I walked into a Bell store in September 2009 and signed a 3-year contract, despite being laid off work only a week earlier.

The biggest complaint was the Palm Prē launch hardware. While very comparable spec-for-spec with the iPhone 3G and 3GS, WebOS featured multitasking with multiple running applications, and Apple didn't. The end result was that the Prē was often laggy and slow. It was unfortunate.

Palm quickly released an updated Prē Plus in May 2010 with twice the memory. Eventually, the Prē2 (released October 2010) would reveal what a WebOS device should be like. I actually had one shipped up from the United States (since Bell had started converting from CDMA to HSPA+, just in time).

HP's wallet to the rescue

However, Palm was struggling financially. Before the Prē2 was released, HP bought Palm. This seemed like a good thing: Palm would finally have the financial backing to take on the Apple and Android platforms.

Looking forward, the Prē3 was on the horizon for 2011, featuring a major CPU upgrade, a significantly higher-res screen, and a refreshed design. Launching alongside the TouchPad tablet, there were some amazing demos. Tapping content between devices was mind-blowing at the time.

Anticipation was high. Months passed.

Finally, the Prē3 launched in Europe on August 16, 2011. "US is coming up in the near future". Two days later, on August 18, 2011, HP killed Palm.

There was hope that the OS would continue, and that other hardware vendors might pop up. Unfortunately, none of that happened (LG smart TVs aside).

Palm Hardware in 2016

Due to the very open nature of WebOS (every device could become a development device after a quick, official procedure), there was already a wide community modifying WebOS.

It was due to the efforts of these people that WebOS 2 was backported to the unsupported Prē (including features like voice dialing, etc).

The homebrew scene was particularly active with the "Preware" package manager, an alternative to the official App Catalog. Additionally, the nature of an OS built on HTML and JavaScript opened the doors to a wide variety of patches and modifications.

WebOS MetaDoctor allowed users to spin a new, customized WebOS image with pre-applied applications, patches, and features (including massaging hardware support). This was then flashed to the phone for a customized user experience.

In late 2014, HP announced they were shutting down the authentication servers. Considering a Palm account was required for device activation, this had the potential to be the final nail in the WebOS coffin, if not for MetaDoctor allowing you to skip the Palm account step.

WebOS in 2016

Luckily, HP decided to open-source WebOS (as Open webOS), as well as the Enyo app framework. This allowed others to continue development of the platform.

LuneOS was born, and began modernizing the platform. While this unfortunately means it can't run on the old Palm hardware, it does run on (slightly) newer Nexus hardware (Galaxy Nexus and Nexus 4).

Followups to come

Stay tuned for articles on both Meta-doctoring Palm hardware, as well as running LuneOS on a Nexus 4.

Snapperd on Fedora with SELinux enabled

Snapper is an excellent utility that provides hourly snapshots of btrfs subvolumes.

Fedora ships with SELinux enabled by default. This is excellent, and shouldn't be disabled. To support this, most software in Fedora has appropriate rules defined, including snapper.

However, snapper's rules only allow it to work on / and /home. If you wish to use it to snapshot /mnt/data, or /srv, or any other particular path, you're going to have a very bad time.

While it is certainly possible to define new rules for paths you wish to back up, I decided that in this one particular case, snapper should be allowed free rein.

sudo semanage permissive -a snapperd_t

The above command tells SELinux to treat snapperd_t (the context snapperd runs within) as permissive. Rule violations will still be logged, but snapper will be allowed to continue.
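If you ever want to review or undo this decision, semanage can also list and remove permissive types:

# List domains currently marked permissive
$ sudo semanage permissive -l

# Drop the exception again later, if desired
$ sudo semanage permissive -d snapperd_t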

ffmpeg part two - Electric Boogaloo

I just attended the Watkins Glen opening day for the second year. It was, again, a blast.

I made some slight adjustments to my ffmpeg assembly procedure from last year.

Dashcam saves video in 5-minute chunks

Instead of creating .list files, I simply used a pipe as input:

for fo in AMBA091*; do echo file "$fo"; done \
    | ffmpeg -f concat -i - -c copy Front-Track1.mov

Front and Rear videos need to be combined

Much like last year, I made short samples to confirm whether any offsets were needed. However, I decided to move the overlaid video to the bottom-right corner to cover the timestamps, since they were incorrect on some videos (well, correct, just not for this time zone).

The math is basically the same as before for scaling, but instead of a left offset of 70, we want a right offset of 70, expressed as a left-edge coordinate. Which works out to:

1920 - 480 - 70 = 1370

After the usual synchronization samples were made, it was time to perform the final assembly.

I used a slightly different file layout this time, keeping the front and rear videos separated. I used a loop to assemble them into a combined video:

$ time for FILE in Track{1,2,3}; do ffmpeg -i Front/DCIM/*${FILE}*mov -vf "movie=Rear/DCIM/Rear-${FILE}-cat.mov, scale=480:-1 [inner]; [in][inner] overlay=1370:740 [out]" -strict -2 ${FILE}.mov; done

You're going to want a good CPU. This is the execution time for just under 48 minutes of video on an Intel i5-2520M:

real    172m2.494s
user    619m51.494s
sys     1m37.383s

Final result

You can see the resulting videos on youtube: Part 1, Part 2, and Part 3. Part 3 has some bad sound. I'm not sure why.

Intel GPU Scaling mode

I was attempting to run my laptop at a lower resolution than the panel's native one. However, by default the video is scaled to fill the panel. This causes the image to be distorted (fonts look bad, etc).

On Linux (with Xorg, anyway), this behaviour can be tweaked with xrandr:

$ xrandr --output LVDS1 --set "scaling mode" "Center"

This is not a persistent setting, which is fine for my purposes.
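In practice, the mode change and the centering can go in a single call; a sketch, assuming the LVDS1 output from above and a hypothetical 1024x768 target mode:

$ xrandr --output LVDS1 --mode 1024x768 --set "scaling mode" "Center"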

Thanks to the Arch Linux Wiki article on Intel Graphics for documenting this.

My failed experiment with CalDAV/CardDAV

In an ongoing quest to lessen my Google dependency, I decided to self-host my calendar and contacts using Baïkal.

Installing and configuring Baïkal is sufficiently documented elsewhere. This post is a (somewhat short) account of why I'm giving up on self-hosted contacts and calendars.

Google

The problems can be summed up into these bullet points:

  • It is assumed (and practically required) that you use the Google Play Store
  • Google Play Store requires a Google Account
  • Google Account means you have Mail, Calendar, and Contacts

Simply adding your Google account to your phone causes Mail, Calendar, and Contacts to sync. Mail you can disable and use an alternate client, as that data is housed internally in the Gmail app and not exposed system-wide for other apps to use.

Calendars and Contacts, even if you disable sync, are still "there". Some apps might add events to the "first" calendar without asking (which may or may not be the one synced to Google, rather than your self-hosted calendar). Updating contacts sometimes adds those updates to the Google Contacts list. There is no apparent way to move items from one contacts/calendar account to another.

Summary

As it stands now, you can either have:

  • self-hosted contacts and calendars with probably most of your data, with the understanding that you will miss some events and people.

  • Google Contacts and Calendar with all of your data.

As much as I preferred self-hosting, it simply isn't practical until you can completely remove Google Contacts/Calendars from your device, and the management apps provide the ability to move events.

AWStats from multiple hosts

I decided I wanted some stats. There are a few options: Use a service (Google Analytics, etc) or parse your logs. Both have pros and cons. This article isn't supposed to help you decide.

I just wanted simple stats based on logs: It's non-intrusive to visitors, doesn't send their browsing habits to third parties (other than what they send themselves), and uses the apache log data I've already got for the entire year.

I'm mainly interested in seeing how many people actually read these articles, as well as what search terms referred them here.

Fix your logs

I've got seven virtualhosts spread across four virtual machines. My first problem is that all of them were using /var/log/httpd/access_log for logging. A lot of grep work later, and I managed to split those out into individual access logs: /var/log/httpd/access_log.chrisirwin.ca, for example.

My biggest problem was that a lot of log entries didn't actually indicate which virtualhost they were from. I ended up spending a few hours coming up with a bunch of rules to identify all queries for my non-main virtualhosts (yay static files). Then I dumped anything that didn't match those rules into my main virtualhost's log (including all the generic GET / entries).

All my logs are sorted into per-virtualhost logs, and all lines from the original are accounted for.

I renamed access_log to access_log.old, just so I don't mistakenly review its data again.

Fix your logging

Now that we've got separate access logs, we need to tell our virtualhosts to use them. In each virtualhost I added new CustomLog and ErrorLog definitions, using the domain name of the virtualhost.

CustomLog       "logs/access_log.chrisirwin.ca" combined
ErrorLog        "logs/error_log.chrisirwin.ca"

Then restart httpd

$ sudo systemctl restart httpd

I also disabled logrotate, and un-rotated my logs with zcat. I'll probably need to revisit this in the future, but a year's worth of logs is only 55MB.
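The un-rotation itself was just zcat; roughly the following, though the rotated filenames are hypothetical and depend on your logrotate configuration (older entries get placed ahead of the live log):

$ zcat access_log.chrisirwin.ca-*.gz | cat - access_log.chrisirwin.ca > access_log.merged
$ mv access_log.merged access_log.chrisirwin.ca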

Fetch logs

It goes without saying that awstats needs to be local to the logs. I have four virtual machines. Do I want to manage awstats on all of them? No.

So I wrote a bash script to pull in my logs to a local directory:

$ cat /opt/logs/update-logs 
#!/bin/bash

cd $(dirname $(readlink -f $0))

# Standard apache/httpd hosts
for host in chrisirwin.ca web.chrisirwin.ca; do
    mkdir -p $host
    rsync -avz $host:/var/log/httpd/*log* $host/
done

# Gitlab omnibus package is weird
host=gitlab.chrisirwin.ca
mkdir -p $host
rsync -avz $host:/var/log/gitlab/nginx/*log* $host/

Now I have a log store with a directory per server, and logs per virtualhost within them.

Configure cron + ssh-keys to acquire that data, or run it manually whenever.
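For reference, a hypothetical crontab entry on the log-collecting host (the path matches the script above; the schedule is arbitrary):

# Pull logs from all hosts nightly at 04:15
15 4 * * * /opt/logs/update-logs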

Install awstats

Then I picked my internal web host and installed awstats. This is in Fedora 22, but requires you to enable EPEL for CentOS/RHEL.

$ sudo dnf install awstats

And, uh, restart apache again

$ sudo systemctl restart httpd

Configure awstats

Now go to /etc/awstats, and make a copy of the config for each domain:

$ sudo cp awstats.model.conf awstats.chrisirwin.ca.conf

You'll probably want to read through all the options, but here's all the values I modified:

LogFile="/opt/logs/chrisirwin.ca/access_log.chrisirwin.ca"
SiteDomain="chrisirwin.ca"
HostAliases="REGEX[^.*chrisirwin\.ca$]"
# DNSLookups is going to make log parsing take a *very* long time.
DNSLookup=1
# My site is entirely https, so tell awstats that
UseHTTPSLinkForUrl="/"

Run the load script

Let's just piggy-back on provided functionality:

$ time sudo /etc/cron.hourly/awstats

Mine took >15 minutes. I think it was primarily DNS related.

Review your logs

By default, awstats figures out which config to use based on the domain name in the URL. However, I've aggregated my logs in a single location. Luckily, the awstats developers thought of this, and you can pass an alternate config in the URL:

https://internal.chrisirwin.ca/awstats/awstats.pl?config=chrisirwin.ca

Tweaks to awstats.conf

Unless you're running awstats on your localhost, you'll be denied access. You'll likely have to edit /etc/httpd/conf.d/awstats.conf and add Require ip 10.10.10.10/16, or whatever your local IP range is. Note that while you can add hostnames instead of IPs, reverse DNS needs to be configured.

While there, you could also add DirectoryIndex awstats.pl.
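Putting both tweaks together, the relevant chunk of /etc/httpd/conf.d/awstats.conf ends up looking roughly like this (assuming the Fedora package's /usr/share/awstats/wwwroot path; the IP range is a placeholder for your own):

<Directory "/usr/share/awstats/wwwroot">
    DirectoryIndex awstats.pl
    Require local
    Require ip 10.10.0.0/16
</Directory>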

Discard (TRIM) with KVM Virtual Machines

I've got a bunch of KVM virtual machines running at home. They all use sparse qcow2 files as storage, which is nice and space efficient -- at least at the beginning.

Over time, as updates are installed, temp files are written and deleted, and data moves around, the qcow2 files slowly expand. We're not talking about a massive amount of storage, but it would be nice to re-sparsify those images.

In the past, I've made a big empty file with dd and /dev/zero, deleted it, then used fallocate on the host to punch the detected holes. However, this is cumbersome.

As it turns out, there is a better way: discard. Discard support was initially added to tell SSDs what data can be cleaned and re-used (SSDs call it 'TRIM'), to preserve performance and extend drive lifetime (allowing better wear levelling). The same methods can also be used to let a VM tell its host what part of its storage is no longer required. This allows the host to actually regain free space when guest machines free it.

I used the following two pages as references. The first is more generically useful for machines with actual SSDs, as well as for checking that trim works through multiple storage layers (dm, lvm, etc).

Fix fstab

Ensure you're not using any device paths in fstab, like /dev/sda1 or /dev/vda1. These steps may renumber or rename your hard disks, and you don't want to troubleshoot boot problems later on. Switch them to LABEL or UUID entries, depending on your preference/use-case.
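A minimal sketch of what that looks like in /etc/fstab (the UUID is a made-up placeholder; blkid will tell you the real one):

# Before: device path that may change when the disk bus changes
# /dev/vda1  /  ext4  defaults  1 1
# After: stable UUID reference
UUID=d0c0ffee-1234-5678-9abc-def012345678  /  ext4  defaults  1 1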

This also means fixing your initrd and grub, if necessary. Most installs shouldn't require that, though. Typically, it's just lazy manually-added filesystems :)

It goes without saying, but reboot your VMs now to ensure they boot after your changes. That will make troubleshooting easier later.

Shutdown VMs and libvirtd

Since I'm doing some manual munging of the VM definition files, first step is to shut down all VMs and stop libvirtd.
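Roughly the following, with hypothetical guest names:

$ for vm in web gitlab; do sudo virsh shutdown $vm; done
$ sudo systemctl stop libvirtd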

Update machine type

Some of my VMs were quite old, and were using old machine versions, as evidenced by one of the .xml files in /etc/libvirt/qemu:

<type arch='x86_64' machine='pc-i440fx-1.6'>hvm</type>

From what I understand, machine types later than 2.1 include discard support. I wanted to update everything to the current 2.3 machine type:

sed -e "s/pc-i440fx-.../pc-i440fx-2.3/" -i *.xml

Add discard support to hard disks

Your sed line will vary here. I've manually specified writeback caching, so my hard drive driver line looks like the following:

<driver name='qemu' type='qcow2' cache='writeback'/>

It was fairly simple to add discard:

sed -e "s#writeback'/>#writeback' discard='unmap'/>#" -i *.xml

It should now look like this:

<driver name='qemu' type='qcow2' cache='writeback' discard='unmap'/>

You could probably key off the qcow2 bit instead of the writeback bit. The order doesn't matter.

Change each hard drive from virtio to scsi bus

All of my VMs were using virtio disks, which don't pass discard through. However, the virtio-scsi controller does.

There is probably a pretty easy way to do this with virsh, but I opted to just use virt-manager, since I have a finite number of VMs (and reading the man page for virsh would take longer than just doing it with virt-manager).

Change Disk Bus to SCSI:

Disk Bus assignment

Change SCSI Controller to VirtIO SCSI:

SCSI Controller Model

The latter step might not be required. The only other option is "hypervisor default", so it might just use virtio-scsi by default. Better safe than sorry.
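For reference, this is roughly what the disk and controller definitions look like in the XML once both changes are made (the image path is hypothetical):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback' discard='unmap'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'/>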

Boot and check your VMs

After starting your VMs, you should be able to confirm that discard support is enabled:

sudo lsblk -o MOUNTPOINT,DISC-MAX,FSTYPE

If you see 0B under DISC-MAX, then something didn't work:

MOUNTPOINT               DISC-MAX FSTYPE
/                              0B ext4

However, if you see an actual size, then congrats. You support discard:

MOUNTPOINT               DISC-MAX FSTYPE
/                              1G ext4

Configure your VMs themselves to discard unused data

Manually run an fstrim to discard all the currently unused crufty storage you've collected on all applicable filesystems:

sudo fstrim -a

Going forward, you can either add 'discard' to the mount options in fstab, or use fstrim periodically. I opted for fstrim, as it has a systemd timer unit that can be scheduled:

sudo systemctl enable fstrim.timer
sudo systemctl start fstrim.timer

Done! Or am I...

Now, there are additional considerations to be made during backup.

For example, if you use rsync, you'll probably want to add --sparse as an option, so it doesn't inflate your backup copy to full size. However, that won't actually punch holes that have been discarded since the last backup. So you still need to use fallocate on your backup copies to actually reclaim discarded space.

Another pain is that I back up to a btrfs filesystem, which uses snapper to preserve previous revisions. This should be a great solution; however, there are other considerations:

  • rsync's default behaviour is to do all work in a copy, then replace the original. As far as btrfs is concerned, this is entirely new data, and doesn't share anything with existing snapshots. That means btrfs snapshots are quite bloated.
  • I need to use --inplace to avoid the above snapshot bloat.
  • --inplace and --sparse are mutually exclusive. Well shit.

My current solution is to use --inplace for backups, then fallocate all files. I try to manually rsync --sparse new VMs ahead of their initial backup to avoid the temporary inflation that --inplace would cause.
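Concretely, my backup pass now looks something like this (host and paths are hypothetical):

$ rsync -a --inplace /var/lib/libvirt/images/ backup:/srv/backup/vms/
$ ssh backup 'for f in /srv/backup/vms/*.qcow2; do fallocate --dig-holes "$f"; done'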

Multiple Instances of Gnome Terminal

Gnome 3 introduced a very handy feature, grouping multiple application windows (whether they be separate instances or not) into a single desktop icon. This means when <alt+Tab>ing through your windows, you can skip over the dozen firefox windows, then dive into just your terminal windows. Generally, this works great, and I think most users don't have any issues.

However, some people (myself included) use a lot of terminals. Some are temporary, short-lived, generic terminals. Others are long-lived, running mail (mutt) or a main development session. Unfortunately, trying to switch to my email terminal can be cumbersome as I squint at thumbnails of 10+ other terminals.

Luckily, the mechanisms to control this are somewhat accessible. Gnome matches a window to a .desktop file based on some window properties, and all like matches are grouped together. It seems GTK+ exposes functionality to modify these values at the command line. I've latched on to the window class (WM_CLASS), but there are likely other rules that guide a match.

WM_CLASS seems to be set by some combination of both the --class and --name GTK+ flags. The first step is to see if I can make an appropriate change. In gnome-terminal, I ran the following:

yelp --class=foo --name=foo

Bingo! I got a standard yelp window, but it identified itself to gnome as foo in the title bar, <alt+tab>, and dock! It didn't get grouped with the yelp shortcut!

Let's do the same with gnome-terminal:

gnome-terminal --class=foo --name=foo

No luck. I got a new window, but it was grouped with my existing one. The above works if this is the first instance, however, so gnome-terminal is certainly capable of changing its window properties.

Turns out, gnome-terminal is being clever. It's actually running a background process, gnome-terminal-server, that owns all windows. Running an additional gnome-terminal simply pokes the existing process to create a new child window.

In gnome 3.6 and earlier, this was easy enough to solve: adding --disable-factory would disable this functionality. However, since gnome 3.8, it's now using some sort of dbus activation. I had given up on gnome-terminal.

I tried using alternative terminals. Ultimately, I was frustrated that they didn't pick up my theme correctly (GTK+-2.0 based), or were weird (terminator), or that the fonts just looked better in gnome-terminal (xterm, etc), or that they needed a universe of dependencies (konsole).

Finally, I came across a lead on Stack Exchange: Run true multiple process instances of gnome-terminal. This led me to a page on Gnome's Wiki on Running a separate instance of gnome-terminal.

This workaround involves manually starting a new terminal server.

/usr/libexec/gnome-terminal-server --app-id com.example.terminal --name=foo --class=foo &

gnome-terminal --app-id com.example.terminal

Aha, so it is possible to get my desired functionality. However, while this works, it isn't ideal: it requires running two commands, rather than just one. Additionally, the server dies after 10 seconds if no clients connect, preventing me from spawning it at login.

But this manual server initialization isn't required with the standard backend. So how does that work?

Turns out, it's a dbus service definition. You can review the current one at /usr/share/dbus-1/services/org.gnome.Terminal.service, then make your own.

I've decided to call my session 'PIM', as I'm using it for my mail/calendar terminals.

cat /usr/share/dbus-1/services/org.gnome.Terminal-PIM.service
[D-BUS Service]
Name=org.gnome.Terminal-PIM
Exec=/usr/libexec/gnome-terminal-server --class=org.gnome.Terminal-PIM --app-id org.gnome.Terminal-PIM

Now, I've also created the associated (and like-named) .desktop file, using /usr/share/applications/org.gnome.Terminal.desktop as a template:

$ cat ${HOME}/.local/share/applications/org.gnome.Terminal-PIM.desktop 
[Desktop Entry]
Name=Mail & PIM
Comment=Mutt and Calendar
Keywords=mail;mutt;calendar
Exec=gnome-terminal --app-id org.gnome.Terminal-PIM -e "screen -DR PIM -c .screenrc-ejpim"
Icon=mail_send
Type=Application
StartupNotify=true
X-GNOME-SingleWindow=false

Note: I'm actually launching it straight into a pre-defined screen session, but you could change the -e parameter to whatever you wish.

Both dbus and gnome-shell need to be restarted to pick up their changes. You can tell gnome-shell to restart itself, but the surest method I'm aware of for dbus is to log out and in again.

Now I can run the "Mail & PIM" shortcut, and get a gnome-terminal window that is grouped separately (with a mail icon!).

At this point, it would be worth investigating gnome-terminal profiles, if desired (different colour schemes, etc).

Unfortunately, while I put my .desktop file in ~/.local/share/applications, I couldn't find a user-specific dbus servicedir. I had to install that in the system itself with sudo. I'd prefer to have it local to my user, so I can move it easily to other systems with my existing vcsh configuration.

Update 2016-10-12

Recently there was some sort of change that caused gnome-terminal-server to run via a systemd service file in the user's session. This means a slight change is required to the steps above.

The dbus service file should now read:

cat /usr/share/dbus-1/services/org.gnome.Terminal-PIM.service
[D-BUS Service]
Name=org.gnome.Terminal-PIM
SystemdService=gnome-terminal-server-pim.service
Exec=/usr/libexec/gnome-terminal-server --class=org.gnome.Terminal-PIM --app-id org.gnome.Terminal-PIM

Additionally, you'll need to create a systemd unit file:

cat /usr/lib/systemd/user/gnome-terminal-server-pim.service 
[Unit]
Description=GNOME PIM Terminal Server
[Service]
KillMode=process
Type=dbus
BusName=org.gnome.Terminal-PIM
ExecStart=/usr/libexec/gnome-terminal-server --class=org.gnome.Terminal-PIM --app-id org.gnome.Terminal-PIM

You'll need to run systemctl --user daemon-reload for the systemd changes to take effect. You may still need to log out for the dbus change.

Video assembly with ffmpeg

I recently took my car, covered with cameras, to a racetrack. I wanted to post these on youtube, but encountered a few issues:

  1. Dashcam saves video in 5-minute chunks

  2. Front and Rear videos need to be combined

  3. I don't know anything about video editing

  4. I didn't have a working video editor

  5. Fedora doesn't seem to ship ffmpeg, and rpmfusion doesn't support Fedora 22 yet

The last point was somewhat resolved by a binary build of ffmpeg.

Dashcam saves video in 5-minute chunks

There is no gap or overlap with the mini0805, which makes clips easy to combine with ffmpeg's concat filter.

I used an input file to list all the components:

$ cat Front-Run2.list

file 'Front/DCIM/100MEDIA/AMBA1299.MOV'
file 'Front/DCIM/100MEDIA/AMBA1300.MOV'
file 'Front/DCIM/100MEDIA/AMBA1301.MOV'
file 'Front/DCIM/100MEDIA/AMBA1302.MOV'
file 'Front/DCIM/100MEDIA/AMBA1303.MOV' 

Then provide that to ffmpeg:

$ ffmpeg -f concat -i Front-Run2.list -c copy Front-Run2.mov

Using the 'copy' codec avoids re-encoding the video, which makes this a quick operation.

This is likely a good time to trim the start and end of the video. I did this on the compiled version:

$ ffmpeg -i Front-Run2.mov -ss 0:45 -t 12:30 -c copy Front-Run2.trim.mov

After reviewing that output, I replace the previous version:

$ mv Front-Run2.trim.mov Front-Run2.mov

Again, the copy codec makes this a quick operation. -ss is the start offset (so the new video will start at 0:45). -t, however, is the duration, so it will end at the 13:15 mark of the original video. There is also a -to option to set a stop time, but I missed that in the man page until writing this article.

This step was needed for all the videos I wanted to use.

Front and Rear videos need to be combined

This step unfortunately involved a small amount of trial-and-error.

First, I wanted to overlay the rear camera footage in the bottom corner of the front camera. This required some additional information:

  • I'm combining Front-Run2.mov and Rear-Run2.mov
  • Videos are 1920x1080. I want the rear video at 25%. That's 480x270
  • I want the overlay to be inset from the edge. Let's say by 70 pixels.
  • There may be a better overlay method, but this works.

Now, in a perfect world, you can just compile the "finished" version. However, both of my dash cams start recording with a variance of 1-3 seconds, so the videos don't line up exactly (even though I used the same trimming). My video has my car start moving at 1:25, so I made a 30 second video starting at 1:15.

Note that we can't use the 'copy' codec, as we're actually modifying the video at this point. This makes the 30 second clips a massive time saver.

$ ffmpeg -i Front-Run2.mov -vf "movie=Rear-Run2.mov, scale=480:-1 [inner]; [in][inner] overlay=70:740 [out]" -ss 1:15 -t 30 Sample-Run2.mov

I then watched the videos to time the difference between when the car started moving in each. I found the rear video was about 1 second behind. I trimmed it using -ss 0:01, then recreated my 30 second sample. Once I was satisfied, I generated the entire video.

$ ffmpeg -i Front-Run2.mov -vf "movie=Rear-Run2.mov, scale=480:-1 [inner]; [in][inner] overlay=70:740 [out]" Combined-Run2.mov

Slight sound censoring

I was discussing a few things I'd rather not have in the video. This means I needed to mute three sections of the audio.

I watched the video, taking note of the ranges I wanted to mute:

  • 7:14-7:19
  • 8:47-9:22
  • 15:02-15:34

Unfortunately, the volume filter I was using seemed to only take seconds... So:

  • 434-439
  • 526-562
  • 902-934
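The conversion is just minutes times 60 plus seconds, which is easy enough to do in the shell. For example:

$ echo $(( 15 * 60 + 2 ))
902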

Now that I was building my "final" video, I converted it to a "faststart" mp4 based on youtube's video recommendations. Also, we're not modifying the video, so we can use -vcodec copy. The audio will be re-encoded, due to the filters.

$ ffmpeg -i Combined-Run2.mov -af "volume=enable='between(t,434,439)':volume=0, volume=enable='between(t,526,562)':volume=0, volume=enable='between(t,902,934)':volume=0" -vcodec copy -movflags faststart Final-Run2.mp4

Final result

You can see the resulting video on youtube.

Outstanding issues

So all videos were 1920x1080, but youtube only offers "720p". I don't know why.

This blog is powered by ikiwiki.