Highlight Reel

Goal

I want to combine a few clips together, with five seconds of intro text on each one.

Create overlay text in GIMP

I created some overlay text in GIMP, then exported it to PNG files. An example (note the transparency and drop shadow):

sample overlay png

Trim clips to length

Using the methods I've described in previous ffmpeg posts, I trimmed the clips, ensuring there was at least five seconds of lead-in on each clip for the text.

ffmpeg -i 1-Tire-Squeal-Front.MOV -ss 1:01 -to 1:24 -c copy 1-Tire-Squeal-Front.trim.MOV

Overlay text on video clips

Then I overlaid the PNG on top of the video for five seconds, thanks to Google leading me to mark4o on Super User:

ffmpeg -i 1-Tire-Squeal-Front.trim.MOV -loop 1 -i 1-Tire-Squeal-Front.png -filter_complex "[1:v] fade=out:st=5:d=1:alpha=1 [ov]; [0:v][ov] overlay=0:0 [v]" -map "[v]" -map 0:a -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart -to 0:23 1-Tire-Squeal-Front.title.mp4

I basically just used his example, except I changed the duration and the offset, since my title PNG is already 1920x1080. Also, I added all that youtube codec stuff.

I did have trouble with the last few seconds of video being clipped when using -shortest. I ended up specifying the appropriate -to length instead.
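As an aside, rather than eyeballing the -to value, ffprobe (which ships with ffmpeg) can report a clip's exact duration; a minimal sketch:

# Print the container duration in seconds, with no extra formatting
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 1-Tire-Squeal-Front.trim.MOV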

Overlay trailing text on video clip

I wanted ending text on the video as well, but was having trouble getting two clips to match exactly (and doing the second clip like the above). Then I realized I could just use a slightly different filter:

ffmpeg -i 3-Bob.trim.MOV -loop 1 -i 3-Bob.png -loop 1 -i 4-Oh.png -filter_complex "[1:v] fade=out:st=5:d=1:alpha=1 [ov]; [2:v] fade=in:st=16:d=1:alpha=1 [oe]; [0:v][ov] overlay=0:0 [v]; [v][oe] overlay=0:0 [vf]" -map "[vf]" -map 0:a -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart -to 21 4-Bob.title.mp4

This creates a third input stream, which is also a looped image. We create stream [oe] (overlay end), which does a fade-in at 16 seconds. We then take [v] and overlay [oe] on it, creating [vf]. We then map [vf] to our output.
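To make that filter graph easier to follow, here it is one step per line (the # comments are annotations for this post, not valid filtergraph syntax):

[1:v] fade=out:st=5:d=1:alpha=1 [ov];   # intro PNG: fade its alpha out at t=5s, over 1s
[2:v] fade=in:st=16:d=1:alpha=1 [oe];   # ending PNG: fade its alpha in at t=16s, over 1s
[0:v][ov] overlay=0:0 [v];              # composite the intro text over the main video
[v][oe] overlay=0:0 [vf]                # composite the ending text over that result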

Done

The video is what I set out to make. It might have been nicer to do some audio fades, but I'll live.

Next time

Now that I'm done, I think it would have been a better idea to have the overlays fade in and out, as well as figure out better scene transitions. Next time...
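For the record, fading an overlay both in and out only requires chaining two fade filters on the same stream; a minimal sketch of just the filter (timings invented):

[1:v] fade=in:st=0:d=1:alpha=1, fade=out:st=5:d=1:alpha=1 [ov]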

ZigZag Volvo

While driving through New York on my way to the Watkins Glen 2017 opening weekend, I encountered a nut in a Volvo zig-zagging through highway traffic. I decided to use ffmpeg yet again (I'm starting to wonder if OpenShot would actually bring anything to the table at this point, besides crashing).

Plan

I want a video with the rear camera until the Volvo passes, then the front camera after. I want to use the front audio for the whole video.

Sync

Luckily, the front & rear videos were synced to within a fraction of a second. Close enough.

Trim

Rear video, starting at 2:40, ending at 4:24, when the car passes me.

ffmpeg -i ../Speedy_Gonzales-Rear.MOV -ss 2:40 -to 4:24 -c copy Rear.trim.MOV

Front video, starting at 2:40, but ending at 4:57, when I want the video to end:

ffmpeg -i ../Speedy_Gonzales-Front.MOV -ss 2:40 -to 4:57 -c copy Front.trim.MOV

Combine

This was pretty easy. I used an overlay as usual, but didn't perform the scaling. I also instructed the overlay filter to continue with the original video when complete:

time ffmpeg -i Front.trim.1080.MOV -i Rear.trim.MOV -filter_complex "overlay=0:0:pass" -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart Combined-FrontAudio.mp4

You'll note I'm using the same ffmpeg voodoo I used last time to help youtube processing time.

I was expecting to have to remove the rear audio, but interestingly enough, it was gone already. I suspect the overlay filter killed it? No idea, actually.
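In hindsight, this is likely just ffmpeg's default stream selection: with no -map options, ffmpeg picks a single audio stream for the output rather than copying all of them. To make the selection explicit instead of relying on defaults, -map works here too; a sketch (same codec/youtube flags as the real command above):

# Same command, but with explicit stream selection: filtered video plus front audio
time ffmpeg -i Front.trim.1080.MOV -i Rear.trim.MOV \
    -filter_complex "[0:v][1:v] overlay=0:0:pass [v]" \
    -map "[v]" -map 0:a \
    -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p \
    -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart \
    Combined-FrontAudio.mp4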

Crop

Except not so fast! I had accidentally recorded the two videos at different resolutions. The rear was doing 1920x1080, while the front was 2560x1080. This causes weird video like this:

screenshot of mismatched video resolutions

Additionally, while 2560x1080 looks great to me (I have an ultrawide monitor), it letterboxes on a standard 1920x1080 display. Worse, youtube adds that letterboxing during their processing, so watching my own 2560 video on youtube gets black bars on all four sides. Lovely. So I need to crop the video as well:

time ffmpeg -i Front.trim.MOV -i Rear.trim.MOV -filter_complex "overlay=0:0:pass,crop=1920:1080:0:0" -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart -to 10 Combined-FrontAudio.mp4

I'm fine with keeping the left of the front frame, as it looks "good enough". If I wasn't, I'd have had to adjust the overlay offset as well as the crop offset.
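For reference, had I wanted the centre of the 2560-wide frame instead, crop accepts expressions, so the x offset can be derived from the input width rather than hardcoded (a sketch of just the filter):

# Take a 1920x1080 window from the horizontal centre of the input
crop=1920:1080:(in_w-1920)/2:0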

Just as a note, I fixed the resolution (and removed the datestamp from the rear camera) before I hit the track on Saturday. So I didn't have to do the cropping for the videos I put on youtube earlier (but filmed chronologically after).

CPU takes it easy

My CPU seems to be taking it easy today. Maybe it's upset I made it work on Easter? Or maybe this task just didn't parallelize as well.

htop screenshot

Done

The finished video turned out as well as I had hoped. I left in the first 90 seconds (despite it mostly being a Pringles-related discussion) because, like any traffic video online, there often isn't enough context. Also, it's always the cammer's fault, so I guess more incriminating evidence...

ffmpeg part three - No more Boogaloos to give

Just like the first two times, I'm assembling my Watkins Glen 2017 track footage with ffmpeg.

However, I encountered a small issue I didn't hit last year, plus I decided to change things up a bit with codecs and audio selection.

ffmpeg requires protocol whitelist now

I'm using pretty much the same concatenation command as last year (filenames are a bit different):

$ for f in Front-*MOV; do echo file "$f"; done | ffmpeg -f concat -i - -c copy Front.MOV

The error I got from ffmpeg looked like:

[file @ 0x55d7ded86680] Protocol not on whitelist 'crypto'!
[concat @ 0x55d7ded7cc80] Impossible to open 'Front-AMBA0009.MOV'
file:: Invalid argument

I used my google-fu and found a helpful blog that pointed me at the cause. This is actually a pretty decent security feature, and should prevent ffmpeg from reaching out to the world without your knowledge. However, it appears the default behaviour (at least from rpmfusion) differs from the man page:

protocol_whitelist list (input)
    Set a ","-separated list of allowed protocols. "ALL" matches all
    protocols. Protocols prefixed by "-" are disabled.  All protocols
    are allowed by default but protocols used by an another protocol
    (nested protocols) are restricted to a per protocol subset.

Unless 'file' was disallowed because it was used via a pipe (a nested protocol). Anyway, the solution was simple enough: whitelist 'file' and 'pipe' (we're piping in the file list).

$ for f in Front-*MOV; do echo file "$f"; done | ffmpeg -protocol_whitelist file,pipe -f concat -i - -c copy Front.MOV

Helpful options to make youtube processing faster

Uploading to youtube is always a lengthy process, as youtube needs to reprocess the video. They have documented the ideal format characteristics. And I found a blog post describing the ideal youtube parameters in ffmpeg terms, so I didn't have to read the man page again. Yay.

The concatenated videos (above) are just using the 'copy' codecs, so I'll leave them as-is. Only when combining the front & rear video will I apply the new settings:

$ time ffmpeg -i Front.trim.MOV -filter:v "movie=Rear.trim.MOV, scale=480:-1 [inner]; [in][inner] overlay=1370:740 [out]" -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart Combined-FrontAudio.mp4

Also, you might want to do this on a powerful computer. My laptop heat-throttled and was converting at 8fps. I copied the video to my server and got 45fps (with a load average of 22!!!). Your mileage will vary depending on your CPU:

htop screenshot

Note that hardware encoding is not an option here due to the video overlay filter.

Audio from rear camera seems better

While comparing the Front & Rear audio clips, it appeared that the audio from the rear camera might be a bit better. It picked up our talking a little bit less, and picked up the (rear/mid) engine noise a little bit better. Also, the front camera had an intermittent click/hiss/whine in some videos. I haven't figured that out yet.

I wanted to do a side-by-side comparison of front & rear audio, so I needed both output files. Additionally, since the bulk of the conversion time was video, and that isn't changing, I can just swap out the audio on the video from above.

We'll use -map to grab the first input video (0:v), and the second input audio (1:a):

$ time ffmpeg -i Combined-FrontAudio.mp4 -i Rear.trim.MOV -map 0:v -map 1:a -codec:v copy -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart Combined-RearAudio.mp4

Note that I re-applied the youtube audio codec wizardry.

Watch the video

You can watch the three videos of the Watkins Glen 2017 trip on youtube.

Using Metadoctor on HP/Palm WebOS Devices

My Devices

WebOS Devices closed

WebOS Devices open

I decided to update three of my four Palm WebOS devices. The Pre (not pictured) and Pre2 (middle) were my primary, daily-driver phones for over two years, from September 2009 through to spring 2012, when I acquired a Galaxy Nexus and made the jump to Android.

The Pre3 (right) I also picked up on eBay. It came in its box, with all accessories and a spare battery.

The Pixi (left) I picked up cheap on eBay, "For Parts", because it "Does not move from activation screen". Now, the Pixi was a low-end device when it launched. It doesn't even have WiFi. This isn't a device you'd buy to use, but as a curiosity, I was interested. Due to the Palm servers being taken offline in January 2015, the activation process will never complete. Interestingly, this Sprint Pixi has a Verizon faceplate, possibly swapped from a Pixi Plus at some point. The model number confirms it is a Sprint device.

Pixi in Activation Loop

The Pre & Pixi are CDMA phones, locked to Bell Mobility and Sprint respectively. So they're effectively useless.

The Pre2 & Pre3 are HSPA+ phones, and I should be able to use them on my current provider, assuming I can locate a SIM adapter.

MetaDoctor

Following the MetaDoctor Wiki is fairly straightforward.

Note:

  • "WebOS Doctor" is Palm's official firmware update program + firmware images

  • "MetaDoctor" is a community-driven makefile that alters the above "WebOS Doctor" images with user-chosen modifications.

Fetching Firmware

After fetching metadoctor from github, you're required to fetch the WebOS firmware images from Palm. Palm, of course, no longer exists. Thankfully, you can get all WebOS Doctor images via archive.org.

You'll have to rename the files after download to append the WebOS version to the filename. In my case:

# Bell Pre
mv webosdoctorp100ewwbellmo.jar webosdoctorp100ewwbellmo-1.4.5.jar 

# Sprint Pixi
mv webosdoctorp200ewwsprint.jar webosdoctorp200ewwsprint-1.4.5.jar 

# Unlocked Pre2
mv webosdoctorp224pre2wr.jar    webosdoctorp224pre2wr-2.2.4.jar 

# AT&T Pre3
mv webosdoctorp224mantaatt.jar  webosdoctorp224mantaatt-2.2.4.jar 

Build Modified Firmware

An important resource is the MetaDoctor README file. It outlines all the options available. Myself, I just want a few options. The instructions say to modify the Makefile, but passing arguments works just fine. These make commands will output a few assembled firmware images to the build directory.

$ make DEVICE=pre  CARRIER=bellmo BYPASS_ACTIVATION=1 BYPASS_FIRST_USE_APP=1 ENABLE_DEVELOPER_MODE=1 DISABLE_UPLOAD_DAEMON=1 DISABLE_UPDATE_DAEMON=1 all
$ make DEVICE=pixi CARRIER=sprint BYPASS_ACTIVATION=1 BYPASS_FIRST_USE_APP=1 ENABLE_DEVELOPER_MODE=1 DISABLE_UPLOAD_DAEMON=1 DISABLE_UPDATE_DAEMON=1 all
$ make DEVICE=pre2 CARRIER=wr BYPASS_ACTIVATION=1 BYPASS_FIRST_USE_APP=1 ENABLE_DEVELOPER_MODE=1 DISABLE_UPLOAD_DAEMON=1 DISABLE_UPDATE_DAEMON=1 ADD_EXTRA_CARRIERS=1 all
$ make DEVICE=pre3 CARRIER=att BYPASS_ACTIVATION=1 BYPASS_FIRST_USE_APP=1 ENABLE_DEVELOPER_MODE=1 DISABLE_UPLOAD_DAEMON=1 DISABLE_UPDATE_DAEMON=1 ADD_EXTRA_CARRIERS=1 all

Note that I'm using CARRIER=att for the Pre3, since it is an AT&T phone. However, ADD_EXTRA_CARRIERS=1 should provide whatever additional data (APN?) is needed for it to work on my network. The Pre2 uses CARRIER=wr: there is no "rogers" carrier, but Rogers is listed on the wiki, and it links to the webosdoctorp224pre2wr.jar file.

Install Firmware

Interestingly, WebOS Doctor images are more than just a firmware image, like you'd have with Android (or other devices). They're actually executable Java bundles, which push the enclosed firmware to the device, so you execute them on your computer. On the bright side, they're Java, so they should work on any platform. On the super-bright side, they actually work just fine with the OpenJDK installed in Fedora. No need to tarnish your system with Sun/Oracle Java.

$ java -jar build/pixi-p200eww-sprint-1.4.5/webosdoctorp200ewwsprint-1.4.5.jar

Now, the downside (of course there was one): there seems to be some trouble actually finding the USB device. It seems that it uses platform-specific builds of novacom. Novacom is sort of like the fastboot and adb utilities from the Android world, combined. Apparently there are libusb woes here, and there are instructions for chasing down old libraries to resolve the issue...

Luckily, none of that matters on Fedora 24, since novacom is actually packaged.

$ sudo dnf install novacom
Last metadata expiration check: 0:52:44 ago on Wed Sep 14 22:24:59 2016.
Dependencies resolved.
=================================================================================
 Package          Arch     Version                              Repository   Size
=================================================================================
Installing:
 novacom          x86_64   1.1.0-0.11.rc1.git.ff7641193a.fc24   fedora       10 k
 novacom-client   x86_64   1.1.0-0.11.rc1.git.ff7641193a.fc24   fedora       24 k
 novacom-server   x86_64   1.1.0-0.14.rc1.fc24                  fedora       52 k

Transaction Summary
=================================================================================
Install  3 Packages

Total download size: 86 k
Installed size: 131 k

You'll need to run it as root to allow direct access to USB devices (you can probably configure some udev permission rules, but I won't be doing this often enough to bother). You can instruct it to fork, but I preferred opening a second terminal, so I could kill the process when I'm done.

$ sudo novacomd

Update: A systemd unit is actually installed. sudo systemctl start novacomd would be the correct method to start the service

You should see a few lines of output indicating a device was found.

Back in our metadoctor terminal, run the WebOS Doctor firmware we assembled earlier:

$ java -jar build/pixi-p200eww-sprint-1.4.5/webosdoctorp200ewwsprint-1.4.5.jar

A few clicks of "Next" should get your firmware flowing. Note that one of the 'next' buttons will be disabled if your device isn't found.

WebOS Doctor

Devices stuck in Activation

As mentioned above, the Pixi was stuck in perpetual activation. You'll need to force it into a mode that will accept firmware. This is sort of like bootloader mode on Android. This process should be identical for all Palm WebOS hardware.

  • Turn the device off
  • Hold Volume Up
  • Either press Power, or plug in USB (both turn on the phone)

The updater should see the device and enable you to load the firmware.

After a successful update, the initial boot screen will be a picture of Tux with a Palm Pre.

Pixi Tux

End of Pixi fun

After booting up the Pixi, I noticed it immediately started in "International Roaming" mode, notified me of pending voice mail, and had a phone number assigned to it.

I immediately placed it into airplane mode, then yanked the battery. Since this device lacks WiFi, there is very little reason to look at it again.

WebOS 2 on Pre

There is a process to install WebOS 2 onto WebOS 1 devices, like the Pre and Pixi. I actually used WebOS 2 on my Pre when it was my primary device back in 2011. It added some notable features, such as voice dialing (handy with bluetooth). It was perfectly functional, though a tad sluggish at times (it was targeting better hardware).

That said, backporting WebOS 2 to these devices is an interesting exercise. There is a WebOS 2 upgrade procedure, which involves running the appropriate script from within the metadoctor directory:

$ scripts/meta-bellmo-pre-2.1.0

The script will list the multiple WebOS Doctor images required at the top. You'll need to fetch these preemptively, as the URLs are dead (pointing to offline Palm servers, again). Note that they use a slightly different naming scheme (they do not get the version appended to the end).

The firmware install procedure is the same as above.

WebOS in 2016

Note: Yes, WebOS technically still exists. However, this article isn't talking about the TV OS version made by LG.

While this article was actually written to discuss getting Palm devices usable, I felt some preamble was necessary.

My Palm History

I've got a soft spot for Palm. My first PDA was the Palm Vx, possibly the greatest PDA ever made: days of battery life from a device that stored all your calendar and contact information, synchronizing periodically with the master copy on your computer.

Palm lost its way with later versions of PalmOS. Devices got more spec-competitive, but batteries didn't. My Tungsten T3, while technically superior in every spec, was actually less capable of performing its primary PIM tasks than the Vx before it.

We'll just skip over the dark years:

  • Software and Hardware divisions split
  • Software eventually dead-ends in development hell (taking the remains of BeOS with it)
  • Windows Mobile on Palm Hardware

Eventually Palm decided to get their act together.

Hello WebOS

Palm's WebOS showing blew me away. I wanted one so much that when they finally released in Canada, I walked into a Bell store in September 2009 and signed a 3-year contract, despite having been laid off only a week earlier.

The biggest complaint was the Palm Prē launch hardware. While very comparable spec-for-spec with the iPhone 3G and 3GS, WebOS featured multitasking with multiple running applications; Apple didn't. The end result was that the Prē was often laggy and slow. It was unfortunate.

Palm quickly released an updated Prē Plus in May 2010 with twice the memory. Eventually, the Prē2 (released October 2010) would reveal what a WebOS device should be like. I actually had one shipped up from the United States (since Bell had started converting from CDMA to HSPA+, just in time).

HP's wallet to the rescue

However, Palm was struggling financially. Before the Prē2 was released, HP bought Palm. This seemed like a good thing: Palm would finally have the financial backing to take on the Apple and Android platforms.

Looking forward, the Prē3 was on the horizon for 2011, featuring a major CPU upgrade, a significantly higher-res screen, and a refreshed design. Launching alongside the TouchPad tablet, there were some amazing demos. Tapping content between devices was mind-blowing at the time.

Anticipation was high. Months passed.

Finally, the Prē3 launched in Europe on August 16, 2011, with the "US coming up in the near future". Two days later, on August 18, 2011, HP killed Palm.

There was hope that the OS would continue, and that other hardware vendors might pop up. Unfortunately, none of that happened (LG smart TVs aside).

Palm Hardware in 2016

Due to the very open nature of WebOS (every device becomes a development device after a quick, official procedure), there was already a wide community modifying WebOS.

It was due to the efforts of these people that WebOS 2 was backported to the unsupported Prē (including features like voice dialing, etc).

The homebrew scene was particularly active with the "Preware" package manager, an alternative to the official App Catalog. Additionally, the nature of an OS built on HTML and JavaScript opened the doors to a wide variety of patches and modifications.

WebOS MetaDoctor allowed users to spin a new, customized WebOS image with pre-applied applications, patches, and features (including massaged hardware support). This was then flashed to the phone for a customized user experience.

In late 2014, HP announced they were shutting down the authentication servers. Considering a Palm account was required for device activation, this could have been the final nail in the WebOS coffin if it weren't for MetaDoctor allowing you to skip the Palm account step.

WebOS in 2016

Luckily, HP decided to open-source WebOS (as Open webOS), as well as the Enyo app framework. This allowed others to continue development of the platform.

LuneOS was born, and began modernizing the platform. While this unfortunately means it can't run on the old Palm hardware, it does run on (slightly) newer Nexus hardware (the Galaxy Nexus and Nexus 4).

Followups to come

Stay tuned for articles on both Meta-doctoring Palm hardware, as well as running LuneOS on a Nexus 4.

Snapperd on Fedora with SELinux enabled

Snapper is an excellent utility that provides hourly snapshots of btrfs subvolumes.

Fedora ships with SELinux enabled by default. This is excellent, and shouldn't be disabled. To that end, most software in Fedora has appropriate rules defined, including snapper.

However, snapper's rules only allow it to work on / and /home. If you wish to use it to snapshot /mnt/data, or /srv, or any other particular path, you're going to have a very bad time.

While it is certainly possible to define new rules for the paths you wish to back up, I decided that in this one particular case, snapper should be allowed free rein.

sudo semanage permissive -a snapperd_t

The above command tells selinux to treat snapperd_t (the context snapperd runs within) as permissive. Rule violations will still be logged, but snapper will be allowed to continue.
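To confirm the change took, and to keep an eye on what snapper would otherwise have been denied, something like this should work (a sketch; it assumes the standard policycoreutils and audit tooling are installed):

# List permissive domains; snapperd_t should now appear
sudo semanage permissive -l

# Review recent AVC messages for snapperd (still logged, just not enforced)
sudo ausearch -m avc -ts recent | grep snapperd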

ffmpeg part two - Electric Boogaloo

I just attended the Watkins Glen opening day for the second year. It was, again, a blast.

I made some slight adjustments to my ffmpeg assembly procedure from last year.

Dashcam saves video in 5-minute chunks

Instead of creating .list files, I simply used a pipe as input:

for fo in AMBA091*; do echo file "$fo"; done \
    | ffmpeg -f concat -i - -c copy Front-Track1.mov

Front and Rear videos need to be combined

Much like last year, I made short samples to confirm whether any offsets were needed. However, I decided to move the video to the bottom-right corner to cover the timestamps, since they were incorrect on some videos (well, correct, just not for this time zone).

The math is basically the same as before for scaling, but instead of a left offset of 70, we want a right offset of 70 expressed in left-hand coordinates, which works out to:

1920 - 480 - 70 = 1370
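Alternatively, the overlay filter can do this arithmetic itself: its x and y parameters accept expressions in terms of the main frame (W, H) and overlay (w, h) dimensions. A sketch of the equivalent filter:

# 70px in from the bottom-right corner, whatever the frame sizes are
overlay=W-w-70:H-h-70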

After the usual synchronization samples were made, it was time to perform the final assembly.

I used a slightly different file layout this time, keeping the front and rear videos separated. I used a loop to assemble them into a combined video:

$ time for FILE in Track{1,2,3}; do ffmpeg -i Front/DCIM/*${FILE}*mov -vf "movie=Rear/DCIM/Rear-${FILE}-cat.mov, scale=480:-1 [inner]; [in][inner] overlay=1370:740 [out]" -strict -2 ${FILE}.mov; done

You're going to want a good CPU. This is the execution time for just under 48 minutes of video on an Intel i5-2520M:

real    172m2.494s
user    619m51.494s
sys     1m37.383s

Final result

You can see the resulting videos on youtube: Part 1, Part 2, and Part 3. Part 3 has some bad sound. I'm not sure why.

Intel GPU Scaling mode

I was attempting to run my laptop at a lower resolution than the panel's native resolution. However, by default the image is scaled to fill the panel, which distorts it (fonts look bad, etc).

On Linux (with Xorg, anyway), this behaviour can be tweaked with xrandr:

$ xrandr --output LVDS1 --set "scaling mode" "Center"

This is not a persistent setting, which is fine for my purposes.
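If you're unsure what your output is called, or which values the driver accepts, xrandr can dump the output properties (LVDS1 happens to be my panel; yours may differ):

# List output properties; look for "scaling mode" and its supported values
xrandr --prop | grep -iA3 "scaling mode"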

Thanks to the Arch Linux Wiki article on Intel Graphics for documenting this.

My failed experiment with CalDAV/CardDAV

In an ongoing quest to lessen my Google dependency, I decided to self-host my calendar and contacts using Baïkal.

Installing and configuring Baïkal is sufficiently documented elsewhere. This post is a (somewhat short) account of why I'm giving up on self-hosted contacts and calendars.

Google

The problems can be summed up in these bullet points:

  • It is assumed (and practically required) to use Google Play Store
  • Google Play Store requires a Google Account
  • Google Account means you have Mail, Calendar, and Contacts

Simply adding your google account to your phone causes Mail, Calendar, and Contacts to sync. Mail you can disable and use an alternate client, as that data is housed internally to the gmail app, and not exposed system-wide for other apps to use.

Calendars and Contacts, even if you disable sync, are still "there". Some apps might add events to the "first" calendar without asking (which may or may not be the one synced to Google, rather than your self-hosted calendar). Updating contacts sometimes adds those updates to the Google Contacts list. There is no apparent way to move items from one contacts/calendar account to another.

Summary

As it stands now, you can either have:

  • self-hosted contacts and calendars with probably most of your data, understanding that you will miss some events and people.

  • Google Contacts and Calendar with all of your data.

As much as I preferred self-hosting, it simply isn't practical until you can completely remove Google Contacts/Calendars from your device, and until the management apps provide the ability to move events.

AWStats from multiple hosts

I decided I wanted some stats. There are a few options: Use a service (Google Analytics, etc) or parse your logs. Both have pros and cons. This article isn't supposed to help you decide.

I just wanted simple stats based on logs: It's non-intrusive to visitors, doesn't send their browsing habits to third parties (other than what they send themselves), and uses the apache log data I've already got for the entire year.

I'm mainly interested in seeing how many people actually read these articles, as well as what search terms referred them here.

Fix your logs

I've got seven virtualhosts spread across four virtual machines. My first problem was that all of them were using /var/log/httpd/access_log for logging. After a lot of grep work, I managed to split those out into individual access logs: /var/log/httpd/access_log.chrisirwin.ca, for example.

My biggest problem was that a lot of log entries didn't actually indicate which virtualhost they were from. I ended up spending a few hours coming up with a bunch of rules to identify all queries for my non-main virtualhosts (yay static files). Then I dumped anything that didn't match those rules into my main virtualhost's log (including all the generic GET / entries).
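The rules themselves were just grep patterns. A hypothetical sketch of the approach (the vhost name and path here are invented for illustration, not my actual rules):

# Requests for a static-file vhost are identifiable by path...
grep 'GET /gallery/' access_log > access_log.gallery.chrisirwin.ca
# ...and everything not claimed by a vhost rule goes to the main site's log
grep -v 'GET /gallery/' access_log > access_log.chrisirwin.ca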

All my logs are sorted into per-virtualhost logs, and all lines from the original are accounted for.

I renamed access_log to access_log.old, just so I don't mistakenly review its data again.

Fix your logging

Now that we've got separate access logs, we need to tell our virtualhosts to use them. In each virtualhost I added new CustomLog and ErrorLog definitions, using the domain name of the virtualhost.

CustomLog       "logs/access_log.chrisirwin.ca" combined
ErrorLog        "logs/error_log.chrisirwin.ca"

Then restart httpd

$ sudo systemctl restart httpd

I also disabled logrotate for these logs, and un-rotated my existing logs with zcat. I'll probably need to revisit this in the future, but one year's worth of logs is only 55MB.
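The un-rotating amounted to something like this per virtualhost (a sketch; it assumes logrotate's date-stamped filenames, which happen to glob in chronological order):

# Append the rotated, compressed logs back onto the live log
zcat access_log.chrisirwin.ca-*.gz >> access_log.chrisirwin.ca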

Fetch logs

It goes without saying that awstats needs to be local to the logs. I have four virtual machines. Do I want to manage awstats on all of them? No.

So I wrote a bash script to pull in my logs to a local directory:

$ cat /opt/logs/update-logs 
#!/bin/bash

# Work from the directory this script lives in
cd "$(dirname "$(readlink -f "$0")")"

# Standard apache/httpd hosts
for host in chrisirwin.ca web.chrisirwin.ca; do
    mkdir -p "$host"
    rsync -avz "$host:/var/log/httpd/*log*" "$host/"
done

# Gitlab omnibus package is weird
host=gitlab.chrisirwin.ca
mkdir -p "$host"
rsync -avz "$host:/var/log/gitlab/nginx/*log*" "$host/"

Now I have a log store with a directory per server, and logs per virtualhost within them.

Configure cron + ssh-keys to acquire that data, or run it manually whenever.
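A crontab entry along these lines would do it (the schedule is arbitrary, and it assumes passphrase-less ssh keys are already in place):

# Pull fresh logs from all hosts every night at 03:00
0 3 * * * /opt/logs/update-logs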

Install awstats

Then I picked my internal web host, and installed awstats. This is on Fedora 22; on CentOS/RHEL you'll need to enable epel.

$ sudo dnf install awstats

And, uh, restart apache again

$ sudo systemctl restart httpd

Configure awstats

Now go to /etc/awstats, and make a copy of the config for each domain:

$ sudo cp awstats.model.conf awstats.chrisirwin.ca.conf

You'll probably want to read through all the options, but here are all the values I modified:

LogFile="/opt/logs/chrisirwin.ca/access_log.chrisirwin.ca"
SiteDomain="chrisirwin.ca"
HostAliases="REGEX[^.*chrisirwin\.ca$]"
# DNSLookups is going to make log parsing take a *very* long time.
DNSLookup=1
# My site is entirely https, so tell awstats that
UseHTTPSLinkForUrl="/"

Run the load script

Let's just piggy-back on provided functionality:

$ time sudo /etc/cron.hourly/awstats

Mine took >15 minutes. I think it was primarily DNS related.

Review your logs

By default, awstats figures out which config to use based on the domain name in the URL. However, I've aggregated my logs in a single location. Luckily, the awstats developers thought of this, and you can pass an alternate config in the URL:

https://internal.chrisirwin.ca/awstats/awstats.pl?config=chrisirwin.ca

Tweaks to awstats.conf

Unless you're running awstats on localhost, you'll be denied access. You'll likely have to edit /etc/httpd/conf.d/awstats.conf and add Require ip 10.10.10.10/16, or whatever your local IP range is. Note that while you can add hostnames instead of IPs, reverse DNS needs to be configured for that to work.

While there, you could also add DirectoryIndex awstats.pl.
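Put together, the relevant fragment of awstats.conf might look like this (a sketch: /usr/share/awstats/wwwroot is the Fedora package's document root, and the IP range is an example):

<Directory "/usr/share/awstats/wwwroot">
    # Allow the local network to view stats (adjust to your LAN range)
    Require ip 10.10.0.0/16
    # Land on the stats page instead of a directory listing
    DirectoryIndex awstats.pl
</Directory>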
