Friday, December 26, 2014

Merry Christmas, Beef Wellington

We celebrated Christmas with a true feast: Beef Wellington.

BLUF: It was a success.

There are things we did differently from the inspiration, and things we would do differently next time, but in short, a delicious success.

The recipe, including pictures, is here.

Tuesday, November 11, 2014

Well, no it wasn't

As simple as zypper dup?

Not.

But not that painful.

The first effort left us with a corrupted yast2 (although yast worked) and no network connectivity.

Using a Windoze machine I downloaded the installation ISO and burned it to a USB flash drive, then used this to upgrade over the previous zypper dup effort. All the repositories had already been converted, so it was relatively easy, although it wiped the startup splash screen and there was still no network. And AutoKey was hosed by a dependency conflict.

But it fixed the video and yast2 issues.

Searching for answers I stumbled across

     doc.opensuse.org/release-notes/x86_64/openSUSE/13.2/#upgrade:

It seems they've changed the network infrastructure from the old ifup scheme to something called "wicked".

That doesn't work on wifi laptops. 

Sigh.

They tell you how to nuke wicked, but it would not work because we didn't have NetworkManager installed.

So we used the ISO once again to install NetworkManager, and followed the rules in the link above to nuke wicked.
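I won't repeat the release notes verbatim, but the gist of the recipe (see the link above for the real thing) is to swap the services with systemctl:

     # The gist: disable wicked, enable NetworkManager in its place
     systemctl disable wicked.service
     systemctl --force enable NetworkManager.service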

That fixed networking, so I then updated everything, which in turn fixed AutoKey.

So all is copacetic.

Sort of:

The default video is still too big, and the fonts too big, and I miss the splash screen hiding all the bootup trash.

But everything else seems to work.

Tomorrow is another day.

Monday, November 10, 2014

Reinstalling the operating system

BLUF: It's easy when you know how.

There are some times that try men's souls, and some are when you have to change your computer architecture or operating system (OS).

This is referred to as migration.

Now, imagine the trials associated with doing both on two integrated computers, all at the same time.

Yikes!

This is how it went:

There is really only one computer (Hewlett Packard tm2t Touchscreen) in this case, with 8 Gigabytes (GB) of Random Access Memory (RAM) and a 500GB Hard Drive (HD) that has been divided ("partitioned") into two parts:

100GB "root" partition (called "/")
400GB "data" partition (called "/data")

A normal computer system comprises two basic physical elements: the physical Hard Drive (HD) and a set of memory chips for Random Access Memory (RAM).

Typically the HD has hundreds of "zerabytes" while the RAM only has 5-10 zerabytes.

(Here, zera is a placeholder for Kilo, Mega, Giga, Peta.... I can't find that it actually is assigned any value, but your mileage may vary. At present we are talking zera = Giga: one Gigabyte (GB) = 10^9 bytes.)

The HD may be either an actual physical spinning platter of some sort or a solid-state device comprising zillions of chips on a super chip.

The RAM is most decidedly an array of chips on a board.

So in today's terms, a system might comprise 500 GB of HD space and 8 GB of RAM. The HD is used for storing information while the RAM is used for executing applications that use the stored information.

But then it gets more complicated, as the HD can be logically separated into several “partitions”, each with its own file structure and contents. Deciding on which directories to put in how many partitions is very much an art, not a science.

The basic idea is to allow disposable or easily reproducible information to reside in one partition and the more precious irreplaceable information to be barricaded somewhere else, so that if the first fails or is corrupted then the latter is protected.

In this representative system:

We have one partition (the "/" or "root" partition) containing all the elements associated with the operating system (OS): basic OS, the application files, hardware driver files, and so forth. Especially, files containing the user preferences, or “Settings”.

Then we have a second ("/data") partition that contains all the personal data, documents, and other information we want to keep safe. This drive has been encrypted with yet another password.

Then there is a bit of hybrid data, such as Settings, that need to live in the / partition to be read by the executing applications but are something of a pain to set up and a pain to lose if you have to reinstall. In this case we:

  • Build them in the / partition
  • Move them to the /data partition and then
  • Provide a pointer (called a symbolic link) in the / partition that points to their location in the /data partition.
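As a concrete sketch (the paths here are hypothetical, but the pattern is real), preserving a browser's settings directory looks like this:

     # Build the settings in / as usual, then move them to /data...
     mv /home/andy/.mozilla /data/settings/mozilla
     # ...and leave a symbolic link behind, pointing to the new home
     ln -s /data/settings/mozilla /home/andy/.mozilla

The applications follow the link transparently, and the settings now live on the protected partition.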

The idea is to install the operating system and all the stuff you might want to change or upgrade in the / partition while keeping personal data safe from exploitation or corruption in the encrypted /data partition.

That way, when you upgrade the system you can reformat the / partition and install the new versions there without losing all your personal data:

You'll have to reconstruct the symbolic links, but that is a relatively trivial (easy when you know how) task if you've kept records of your links. See PPPPPP below...

So, having taken the time and discipline to record the applications, links, and repositories (more on those later) that you are using, it is an easy matter to install a new or reinstall the former OS. You simply reformat the first partition, replacing all its contents, reinstall the applications from your saved sources or updated repositories, and then reconstruct the symbolic links to settings.

Seldom more than perhaps an hour or so of work.

OK, so the first step was to migrate the basic machine from a 32-bit architecture to a 64-bit architecture.

What does this mean?

Architecture refers to the underlying physical structure of the computer. This includes the dynamic Random Access Memory (RAM) chips, where the moment-to-moment execution of program steps takes place, the more static Hard Drive (HD) storage where programs and data are stored (but not executed), and the overall structure of all of these.

Perhaps the most fundamental issue is “How much stuff can you usefully save?”

This depends not only on how big the box might be (RAM and HD size) but also whether you can find it once it is in the box.

Finding stuff on the HD is easy: you simply specify a path through all the directories and subdirectories to the file you seek.

But RAM has specific locations ("addresses"): different sets of transistors in the array where the information for a particular record starts and stops. So you have to know how many addresses you can keep track of and read.

This, in turn, depends on how big your addresses are.

These addresses are defined by a certain number of bits: single states of a group of two-state (binary: on-off) transistors. So dig out your statistics book and calculator:

If you have x bits (transistors) per address you can form 2^x different combinations of x bits, hence 2^x distinct addresses, assuming you have enough RAM.
If you have 32-bit addresses then you can address 2^32 = 4,294,967,296 different locations of memory (about 4.3 GB, byte-addressed).

But these days, RAM comes in tens of GB, so we need something else. By moving to a 64-bit architecture we can now address 2^64 ≈ 1.8 × 10^19 (18 followed by eighteen more digits; exactly 18,446,744,073,709,551,616) different addresses, which indeed is a very big number.

Which might be a good thing to do. 
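In fact you can check the arithmetic from any shell. bash does its own arithmetic in signed 64-bit integers, so the wraparound at 2^64 is itself a demonstration of the limit:

     $ echo $((2**32))
     4294967296
     $ echo $((2**64))     # wraps around to 0: bash itself has only 64 bits to work with
     0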

So we proceed with this:


  • Download the new operating system as an image of a DVD and burn it to a new DVD-R disk
  • Write and save a record in /data that summarizes:
    • All the programs we needed to reinstall after installing the new OS 
    • All the links we needed to reconstruct
    • Network settings and passwords
    • And, importantly, all the repositories that were being used
    In short, we exercised the axiom: Prior planning prevents {...} poor performance (PPPPPP)...


  • Put the disk in the DVD drive and reboot the system
  • Make sure it is running well, and, well...
  • Go to the gym.
When we came back, the new 64-bit OS was running just fine.

Next, as expected, we reinstalled the programs and symbolic links from our record in /data.
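A couple of commands capture most of that record, and read the repositories back in afterwards (a sketch; the /data/records location is my own invention here):

     # Save the list of installed packages and the repository definitions
     rpm -qa --qf '%{NAME}\n' | sort > /data/records/packages.txt
     zypper lr --export /data/records/repos-13.1.repo
     # ...and after the reinstall, read the repositories back in
     zypper ar /data/records/repos-13.1.repo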

Reinstalling the programs requires downloading them from the distribution's repositories.

OK, so what is all this blather about repositories?

Repositories are simply that: a place where piles of stuff are being saved. In computer terms they are internet locations where developers upload and "serve" the latest versions of the various applications and other files. And there are zillions of them. 

The point is that with repositories you have some semblance of assurance that the programs offered ("served") there have had some sort of oversight and review, so that they are neither corrupted nor malware-infected. Not a guarantee, but a stronger assurance. Furthermore, the programs served there have more likely been checked for the other files that are needed for them to run (called "dependencies"), so loading applications from a repository minimizes your hassle.

Look: there are some people who like a top-down Bottom-Line-Up-Front approach (guess who) and some who like to work at perfecting the finest details of an object. Thanks be to God for both. The repositories are the domain of the latter. In Free and Open Source Software (FOSS) there are literally thousands, if not millions, of people around the world working through an elaborate cooperative scheme to develop, track, debug, and serve thousands if not millions of different applications, at least for Linux-based systems. These people fix bugs, find corruptions, and generally make the world a better and safer place.

For free. Because they want to and can.

So repositories are your friend. Download software from "official" repositories  whenever possible.

Configuring repositories is yet another art rather than a science. They are an elaborate, but fairly straightforward, functional decomposition of applications by category. So if you are interested in a particular type of software, e.g., Education, then there is probably a repository that contains only applications associated with Education.

Topic for another time, but (remember the six Ps) if you record the repositories in the old installation you can then (at least for openSUSE) just change the version:

     http://download.opensuse.org/repositories/filesystems/openSUSE_13.1/

simply becomes

     http://download.opensuse.org/repositories/filesystems/openSUSE_13.2/

And so forth for the remaining (23 in my case) different repositories...

=====

So, after all that we were done.

Well, sort of. It's a bit more complicated.

You see, we're running yet another machine in software inside of this machine. This is called Virtualization:

Virtualization is a technique that allows you to embed one computer inside of another.

A virtualization application, such as Oracle VirtualBox, VMWare, or several others, creates an entirely new computer in software; hence the term “virtual machine” or VM. 

The virtualization application (e.g., VirtualBox) resides in the first (/ or root) partition and is executed in RAM. It reads and operates on the virtual machine (VM), which is a rather huge single file that resides in the /data partition of the parent (host) machine and is referred to as the guest machine.

The guest in turn contains one or more (virtual) HD partitions of its own, each with its own set of an operating system, applications, and user data which can be and usually are completely different from those of the host system. And it has its own (virtual) RAM, necessarily smaller than that of the host, since it has to use the host RAM resources.

This process of virtualizing a machine within a machine within yet another machine... can continue as long as time (and actual hard disk and RAM space) hold out.

(I'm reminded of the broom in the Sorcerer's Apprentice... smart man, Walt Disney...)

So, upgrading all this is not a trivial matter: 

You can choose to upgrade the host system without affecting the embedded (guest) system.
You can choose to upgrade the guest without affecting the parent host system.
But there are physical limitations to HD and RAM space, so if you want to upgrade both, then there is a definite cascade of steps that you must take in order to succeed:

First, upgrade the top-level host system:

The new main operating system (in this case openSUSE 13.1) is a 64-bit system, while the older system was 32-bit. So we could not use the normal zypper distribution upgrade process, which simply requires renaming the repositories and issuing a single command:
     zypper dup

Instead, it required a "clean" install: reformatting the / partition, reinstalling all the applications, and reconstituting the symbolic links. Not a huge effort but, as noted, still more effort than
     zypper dup.

Next, upgrade the first-level guest system:
The Windows migration in the virtual machine was from 32-bit Windows XP to 64-bit Windows 7. This required a bit more effort:

  • First we had to increase the RAM in the VM from 1 GB to 2 GB. Merely a click of a button once we knew how, but finding out how took a few hours. First we had to take the machine apart to confirm that there was enough physical RAM present, since free only showed 2 GB. It turned out that we were running a 32-bit kernel without PAE which, for the reasons discussed above, cannot see more than 4 GB. The VM therefore assumed the host only had 4 GB and would not allow increasing the VM RAM beyond 1 GB.

This was solved by installing a PAE kernel and trying again (push the car back up the hill... :-( )


  • Next we had to increase the virtual HD from 20 GB to 40 GB, according to Microsoft's system requirements page. But hello, that required a partitioning tool. Straightforward once you know how, but more time gone: downloading, mounting, and farkling about. You have to first increase the size that VirtualBox allows in the host, then actually go into the guest operating system and grow the partition there as well (see the sketch after this list).
  • Then we cloned the system to ensure we still had a system if all else failed (see PPPPPP)
  • This caused a conflict, since the new drive had the same UUID as the old drive (duh, look up "clone": how do you spell "genetically identical"?), so there was some more messing about to figure out how to change the UUID.
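For reference, the VirtualBox side of that dance comes down to a couple of VBoxManage commands (a sketch; the filenames are mine, and the resize argument is in MB):

     # Grow the virtual disk from 20 GB to 40 GB; the partition inside the
     # guest still has to be enlarged separately, with a partitioning tool
     VBoxManage modifyhd /data/vm/windows.vdi --resize 40960
     # A raw file copy of a .vdi keeps the old UUID, which VirtualBox refuses;
     # this stamps the copy with a fresh one
     VBoxManage internalcommands sethduuid /data/vm/windows-clone.vdi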

But eventually we were ready and followed the excellent instructions at

     http://winsupersite.com/article/windows-7/clean-install-windows-7-with-upgrade-media-128512

to succeed.

And of course there are always some dirty little secrets that you have to figure out...

=====

Well that was then (a year ago) and this is now. Now I want to move from 64-bit openSUSE 13.1 to 64-bit openSUSE 13.2 without changing the contained VM.

According to this account, that should simply be a matter of renaming the repositories in yast2 sw_single from 13.1 to 13.2, and then issuing the command zypper dup from the root command line.

Can it really be that simple? Well, not quite that simple, but simple enough.

Watch this space:

     zypper repos --uri
     cp -Rv /etc/zypp/repos.d /etc/zypp/repos.d.Old
     sed -i 's/13\.1/13.2/g' /etc/zypp/repos.d/*
     zypper ref
          The following failed, so we chose ignore and then removed them from the repo list:
               file systems
               DarkSS
               Qt
          All the rest worked fine.

So now holding breath: There are dire warnings of consequences, so I'll close everything down, log out to a new root session, and issue the fatal command:
     zypper dup --download "in-advance"


Friday, November 7, 2014

Très cool: Convert .ics to .csv

One more step towards freedom.

This gives me an ICS ==> CSV translation that renders

EVENT     STARTDATE     STARTTIME     ENDDATE     ENDTIME     COMMENT
in chronological order.

=====

I know about logging programs, billable time gigs, and the lot. There are lots of them out there, you can Google for them.

But I wanted something that would just simply log what I do. And, if I decide to take a nap, record idle time, and in the morning tell me what time I really went to sleep.

Simple stuff.

Not.

The only thing I've found like that is a wonderful app for Windows, but I'm not on Windows. I've given him a salute there, but this is here: Linux.

Eventually I found KTimeTracker. It is indeed very cool: it collects what you do by the window title, when you started, and when you stopped. And it has a bunch of reports on how much time you spent on this or that.

But it doesn't give you the times...

It does have a File > Edit History feature that shows exactly what I want, but there is no way to save or export it.

KTimeTracker does export an .ics file, and I had written a BASH script to restart it with a new .ics file. The script is called time_new, and I run it every day when I start work. It starts a new .ics file and calls a routine called time_awk.
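I won't reproduce time_new in full, but the gist is something like this (a sketch with hypothetical paths, and assuming KTimeTracker will take the .ics file as an argument):

     #! /bin/bash
     # Sketch of time_new: parse yesterday's log, then start a fresh one
     archive=/data/info/korganizer/archive/andy     # assumed archive location
     old=$(ls -t "$archive"/*.ics | head -1)        # the most recent log
     new="$archive/$(date +%y%m%d)_andy.ics"
     killall ktimetracker 2>/dev/null               # stop the running session
     time_awk "$old"                                # parse it (script below)
     touch "$new"
     ktimetracker "$new" &                          # restart on the new file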

Now the rubber is starting to meet the road.

I had written time_awk some while ago to parse the .ics file into single-line log and event rows, but the output was still a mess.

So, doing what I already knew how to do, I delved into writing some Windows Visual Basic macros to further refine it into what I wanted, the equivalent of KTimeTracker's History screen, which could then be saved as an Excel spreadsheet.

But this is tedious. First, I almost never use Windows except for this; then there is the bloatware of starting up Windows 7 (under VirtualBox in my case); and then the actual parsing takes forever under VirtualBox. After the macro runs you get the hourglass forever unless you switch out of the guest to the host Linux platform and back. I haven't taken the time to figure that one out.

What a PITA.

So I finally bit the bullet. It took me a couple of days, but I have now constructed an AWK file that does exactly what I want: 
  • Translate an ICS file exported by KTimeTracker into a CSV file that can be opened in LibreOffice to show:
EVENT     STARTDATE     STARTTIME     ENDDATE     ENDTIME     COMMENT

in chronological order.

In the process, I've learned a lot more about AWK, SED, and the interactions with the various shells: SH, BASH, CSH, etc.

And yes, Virginia, there is a difference. The most maddening is in the ability (or not) to assign shell variables values from within AWK.

I won't take the time now to explore all that, just give you the answer. The script is well commented but, as always, if you don't know what you're doing you might want to wait until you do.

In the meantime, all you gotta do is point this at your .ics file and the result will be a CSV-formatted .xls file with the same name, as in:

     time_awk 141105_andy.ics

that results in

     141105_andy.xls

Easy when you know how.

Enjoy. 

=====

Here it is, under the standard GPL:

#! /bin/bash
#
# License: GPL v3+ (see the file LICENSE)
# (c)2001-2014 C. Andrews Lavarre
# email : alavarre@gmail.com
#
########################################################################
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 3 of the License, or    #
# (at your option) any later version.                                  #
#                                                                      #
# This program is distributed in the hope that it will be useful,      #
# but WITHOUT ANY WARRANTY; without even the implied warranty of       #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the        #
# GNU General Public License for more details.                         #
#                                                                      #
# The GNU General Public License is posted at                          #
#    http://www.gnu.org/licenses/gpl.txt                               #
# You may also write to the Free Software Foundation, Inc.,            #
# 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA             #
########################################################################

# This routine was modified 141106 by C. A. Lavarre (Andy).
# It invokes AWK to parse an ICS file
# Arguments:
# $1 the path (optional) and filename of the ics
# e.g., /data/info/korganizer/archive/andy/130112_andy.ics
# Usage
# time_awk $1
# Save the input
target=$1
# Check the input
if [ "$target" == "" ]; then
    # Complain, show the usage, and quit
    echo "Usage:"
    echo "    time_awk workingdirectoryandicsfilename"
    echo "    e.g., /data/info/korganizer/archive/andy/130112_andy.ics"
    echo "Quitting"
    exit 0
fi
# Split the argument into its directory, filename, and basename parts
source=$target
myfile=${source##*/}
mybase=${myfile%.*}
mydir=$(dirname "$source")
# If no directory was given then use the default archive directory
if [ "$mydir" == "." ]; then
    mydir="/data/info/korganizer/archive/andy"
fi
mydir=$mydir"/"
# Reassemble the full path to the target file
target=$mydir$myfile
# Name the new .xls file
output=$mydir$mybase".xls"
cd "$mydir"
echo "Saving "$target" awk results to "$output
echo "Continue? (y|n):"
read user
if [ "$user" == "n" ]; then
    # Quit
    echo "cd to the correct directory, follow the usage."
    echo "Quitting"
    exit 0
fi
# Build the file
# Usage: awk [POSIX or GNU style options] [--] 'program' file ...
# awk -F : 'all the instructions' $target
# The -F : option declares the colon as the field separator
# For lines starting with DTSTART or DTEND {do THIS} (^ anchors the start of the line)
# Then after SUMMARY {do THAT}
# and after COMMENT {do THE OTHER}
# and after END:VEVENT {FINISH}
# where
# THIS is
#     {printf "%s@", $2} which instructs to
#     print the second field $2 as a string followed by an @ sign as the delimiter
# THAT is
#     {printf "\"%s\"@", $2} which instructs to
#     print the second field $2 as a string in double quotes, again followed by the @ delimiter
# THE OTHER is
#     {c=$2} which instructs to
#     set the variable c to (copy) the value of the second field
# FINISH is
#     {print c; c=""} which instructs to
#     print the COMMENT and clear the copy variable
# Then pipe the whole lot to the tr command:
#     Usage: tr [OPTION]... SET1 [SET2]
#     Translate, squeeze, and/or delete characters from standard input;
#     tr -d '\015' deletes the character with octal value 015 (CR)
# Pipe it to awk to strip off the first line
# Pipe it to sed: sed 's/,//g' removes all commas
# Pipe it to awk to remove lines with an empty third field
# Pipe it to sort to order the lines by field #2 (the start date-time)
# and send the lot to $output
awk -F :\
'/^DTSTART/||/^DTEND/ {printf "%s@", $2}\
/^SUMMARY/ {printf "\"%s\"@", $2}\
/^COMMENT/{c=$2}\
/END:VEVENT/ {print c; c=""}' $target\
| tr -d '\015'\
| awk 'NR>1'\
| sed 's/,//g'\
| awk -F @ '$3!=""'\
| sort -t "@" -k2\
> $output
chmod +x $output
# Create alternate work files
output1=$output"1"
output2=$output"2"
# Pipe it to awk to
# copy field $3 into the last field ($4),
# change the delimiter from @ to comma,
# and send the lot to $output1
awk -F @\
'{$(NF)=$3;}1' OFS="," $output > $output1
# Split field #2 on the T delimiter into $2 and $3:
# Split field $4 on the T delimiter into $4 and $5
# and send the lot to the file $output2
awk -F ,\
'\
{split($2, a, "T"); $3 = a[2]; $2 = a[1];}\
{split($4, a, "T"); $5 = a[2]; $4 = a[1];}\
{ print $0;}
' OFS="," $output1 > $output2
# Convert columns $2 and $4 from YYYYMMDD to MM/DD/YYYY
# Convert columns $3 and $5 from HHMMSS to HH:MM:SS
# and send the lot to $output
awk -F ,\
'\
{$2 = substr($2,5,2)"/"substr($2,7,2)"/"substr($2,1,4)}\
{$3 = substr($3,1,2)":"substr($3,3,2)":"substr($3,5,2)}\
{$4 = substr($4,5,2)"/"substr($4,7,2)"/"substr($4,1,4)}\
{$5 = substr($5,1,2)":"substr($5,3,2)":"substr($5,5,2)}\
{print $0;}
' OFS="," $output2 > $output
# Delete the temporary files
rm $output1
rm $output2
# Report

echo "Parsing complete."

Monday, October 13, 2014

Importing CSV into GnuCash

So we have experimented scientifically and carefully and can confirm and replicate the steps below to import CSV files into GnuCash 2.6.4:

1. Open the CSV file (e.g., exported from GnuCash) in an editor (e.g., LibreOffice Calc):

    a. Edit the file to change the date column format from MM/DD/YYYY to m-d-yy (a sed sketch follows step 1)

    b. If there are split transactions
        i. Add the parent transaction date to each of the splits
        ii. Add the parent transaction account to each of the splits
        iii. Delete the parent summary line
        iv. Delete the parent split line
        v. Select the number columns and delete all minus signs
        vi. Save and close.
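If you'd rather not do the date surgery of step 1a by hand, a sed one-liner handles the MM/DD/YYYY to m-d-yy change (a sketch with hypothetical filenames; test it on a copy first):

     sed -E 's#\b0?([1-9][0-9]?)/0?([1-9][0-9]?)/[0-9]{2}([0-9]{2})\b#\1-\2-\3#g' export.csv > import.csv

It strips the leading zeros, swaps the slashes for dashes, and keeps only the last two digits of the year, so 09/04/2014 becomes 9-4-14.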
        
2. In GnuCash:
    a. Import Transactions from CSV

    b. Keep the Data Type as Separated
        i. If it does not appear as columns then examine the Separators
            to find the right one (Comma, Tab, Semicolon, etc.)

    c. Change the Date Format to m-d-y

    d. Change the Currency Format to Period

    e. In the None |None |None |None |... row:
        i. Rename the Date, Num, Account, Description, To Num, From Num
            columns to
                Date, Num, Account, Description, Deposit, Withdrawal

    f. If you see headers in the presented view then change 
        i. Start import on row
            to whatever causes the headers to be highlighted in pink,
            e.g., 2

    g. Change 
        i. stop row on
            to whatever clears any data from being highlighted in pink,
            e.g., 4

    h. Click Forward

    i. The Match Transactions screen appears:
        i. It has correctly read the Category column and creates 
            an Info column entry that states
            New, transfer $(xxx) to (auto)"*Category Name*"

    j. Click Apply. It reports success. Click Close.

    k. If it is a new transaction it appears. If it is a duplicate it simply overwrites the original

Discussion

I was on GnuCash 2.4.13 under Linux openSUSE 13.1. This is the default version offered by their repositories. However, in yast2 sw_single you can search for gnucash and open the Versions tab to select the current 2.6.4 offering:

2.6.4-63.1 x86_64

You have to do this also for gnucash-lang to avoid conflicts.
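(The command-line equivalent would presumably be to pin the versions explicitly, something like:

     zypper in gnucash=2.6.4-63.1 gnucash-lang=2.6.4-63.1

but I did it in yast2.)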

That works. So now we are on 2.6.4.

This offers File → Export → Transactions to CSV

That works just fine.

Importing, however, is a bit more daunting. 

1. First you clear out all the unwanted columns

It appears that the GnuCash CSV import routine requires exactly one line per transaction split, and NO HEADERS, NO TRANSACTION SUMMARY line, and a funky DATE FORMAT.

So you have to load the exported CSV file into an application, e.g., LibreOffice, and farkle with it a bit. ESPECIALLY if it was exported from GnuCash.

2. Regarding the date format: 

It appears that the routine requires m-d-y. But neither Windows 7 nor LibreOffice has that explicitly. You can enter m-d-y as a custom date format in LibreOffice, but 09/04/14 is then rendered 9-4-y.

So you must use m-d-yy in LibreOffice. 

3. Remove unwanted rows:

It appears that the GnuCash import routine expects one and only one line for each transaction split and really only requires 5 fields to work.

But if the CSV was exported from GnuCash you will have at least three lines with fifteen fields:
    1. A summary line for the transaction with Account and fifteen other fields.
    2. Another line for each destination account but no date or account or description fields
    3. Another line for the parent account transaction split again without date or account or description fields.
    
    You must remove #1 and #3.

Otherwise, a piece of cake. 

Easy when you know how.

Sunday, October 5, 2014

Solved: Chromium Aw Snap problem under Linux openSUSE 13.1

Just a quick post for google fodder:

I'm running under Linux openSUSE 13.1 with Chromium Version 37.0.2062.94 (290621) (64-bit)

I started getting "Aw Snap" errors in the middle of September 2014. I tried all the usual fixes (disabling and re-enabling extensions, deleting the profile, etc.).

The error message is 


[23335.702221] Media[1252]: segfault at 0 ip 00007f940df42700 sp 00007f940a73e3e8 error 4 in libffmpegsumo.so[7f940ddb3000+23a000]

Checking in yast2 sw_single I see there are two ffmpeg modules:

     chromium-ffmpeg
     chromium-ffmpegsumo

The second only appears when you disable the first.

So I did, and the problem disappeared. Now when I check yast2 sw_single, the first package is gone.

So it would appear that an upgrade to the ffmpeg package broke Chromium, and deleting the older version fixed it.
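The command-line equivalent of that yast2 surgery would presumably be:

     # Remove the broken codec package; the -sumo variant takes its place
     zypper rm chromium-ffmpeg
     zypper in chromium-ffmpegsumo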

Easy when you know how.

Tuesday, June 24, 2014

A Work in Progress: OwnCloud and CALDAV

Very very cool.

OwnCloud and Caldav Sync Free Beta, that is.

I wanted to get away from big data. This link gave me the inspiration. So I've installed OwnCloud.

Avishek Kumar does an excellent job of providing a straightforward overview of the installation process:
     http://www.tecmint.com/install-owncloud-to-create-personal-storage-in-linux/

So now it works at http://localhost/owncloud/index.php/ but I want sync.

Well first we explored import/export of .vcf and .ics files. This is an interesting exercise, and I've learned a lot. In particular, 

     http://forum.owncloud.org/viewtopic.php?f=3&t=3067

is an excellent exposition on automating the download of .vcf and .ics files from OwnCloud. 

I worked these into a BASH script and quite happily now have the requisite files. And Evolution quite happily imports them.
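The heart of that BASH script is essentially a pair of curl calls (a sketch; it assumes the ?export suffix described in the forum thread, and of course your own username, password, and URLs):

     # Pull the calendar and the address book down as .ics / .vcf files
     curl -u andy:password -o andy.ics \
          "http://localhost/owncloud/remote.php/caldav/calendars/andy/defaultcalendar?export"
     curl -u andy:password -o andy.vcf \
          "http://localhost/owncloud/remote.php/carddav/addressbooks/andy/default?export"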

But what about the 'droid?

Well, it turns out that there really is no happy way to import .ics files into the Android calendar. At least I couldn't find one.

So bite the bullet and start learning yet another something new: DAV.

Simply stated, in the beginning there was WebDAV, and then CardDAV and CalDAV. All are protocols for synchronizing data.

Well, it turns out that Konqueror (and a lot of other file managers) already have webdav support built in. So the first step (synchronizing, exchanging files) is a piece of cake. Enter the following (obviously changing the path to your OwnCloud's URL) in the address field of the file manager:

     webdav://localhost/owncloud/remote.php/webdav

This will prompt you for your OwnCloud username and password, and then present the OwnCloud file tree.

Easy when you know how.

Next: Synchronize Evolution:

But this is problematic. All our attempts fail.

On the other hand, the Android is a delight:

     https://play.google.com/store/apps/details?id=org.gege.caldavsyncadapter

just works.

It's a little tricky, since it does not appear in the normal App Store. Go to 

     Settings →Accounts

and choose Add Account. Choose CalDav Sync Adapter and add the required information, e.g., 

     User:               andy
     Password:     **********
     URL:
               http://localhost/owncloud/remote.php/caldav/calendars/andy/defaultcalendar

For Calendar:
You get the URL from the OwnCloud screen. Click the gear icon in the upper right corner. There are a number of icons. The second one is the link. Click on it and read the URL in the popup dialog. 

(There are also a number of greyed-out sub-icons, including a download arrow. Click it, and it will download the current calendar as an .ics file, which then happily imports into Evolution.)

For Contacts there is no such gear icon. The URL is in the form:

               http://localhost/owncloud/remote.php/carddav/addressbooks/andy/default

But the calendar works great, so that is enough for now.

Easy When You Know How: Deskewing text scans under Linux

http://galfar.vevb.net/wp/projects/deskew/

I've been scanning documents for ages, but lately more than ever, and especially in large numbers.

So I use the ADF (Automatic Document Feeder) on my HP-6400 scanner with XSane to direct the images to a Download directory.

But inevitably they are off by a fraction of a degree, no more than 0.5° or so, but my eye still catches it. And deems it unprofessional.

But if you have fifteen pages, opening each one to correct them is a royal PITA. And time consuming to boot.

So I cast about for ways to fix it: 
  • Looked for GIMP plugins. There was one once upon a time called Deskew but it seems to have evanesced.
  • Looked for options under ImageMagick. Its convert and mogrify tools are very capable, and there is a -deskew option, but I did not find any simple description of how to use it.
  • Looked for other options...
And found the cited program deskew. It is current, and has versions for Linux (both 32- and 64-bit), Mac, and Windows. You can recompile it if you really want to, but it comes with a Bin subdirectory containing precompiled versions for all three OSes.

And it is a positive dream to use. Most of the parameters have sensible defaults. I changed the binary RRGGBB background (-b) parameter to white (ffffff) from its default black (000000), but frankly haven't had the time to examine whether that was really necessary.

So the command at its simplest for me was

     deskew -o output.jpg -b ffffff input.jpg
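And since the whole point was batch work, a simple shell loop dispatches a fifteen-page scan in one pass (adjust the filename pattern to taste):

     for f in scan_*.jpg; do
          deskew -o "deskewed_$f" -b ffffff "$f"
     done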

Yes, you have to know what you're doing: unzip the download and move the entire resulting Deskew directory to somewhere useful, e.g., /data/graphics. You also need to

     chmod +x deskew

to make it executable, and adjust your .bashrc to include that path (the line is shown below), or simply prepend the path to the deskew command:

     /data/graphics/Deskew/Bin/deskew ...
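The .bashrc alternative is just the usual PATH extension:

     export PATH="$PATH:/data/graphics/Deskew/Bin"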

So, generally, a breeze.

A very elegant piece of work.

Well done. 

Sunday, May 18, 2014

Easy When You Know How: Garmin .tcx download and conversion under Linux

My quest to free myself from Windows continues, successfully.

There is very very little left for which I need Windows. One thing (until now) has been to download exercise data from the Garmin Forerunner 405 watch.

The quest began with Google, of course, which revealed Gant.

Gant is a program that has faded from sight. garmin-ant-downloader (GAD) is a fork.

Both purport to link to Garmin devices, e.g., the Forerunner 405 watch.

I tried both the 32- and 64-bit versions of GAD and could not get either to work: both threw segmentation faults.

http://www.jamesarbrown.com/?p=5 provides a link to a version of gant that works:

     https://github.com/jamesarbrown/Gant

You can download a zip of the source files that also contains the fully compiled gant program. It also has a copy of the binary auth405 file.

And it just works:

1. Unzip the zip file to a location you like, e.g., /usr/local/bin/holdings/fitness/gant.

2. Go there in a terminal as root, make gant executable, and change the ownership to your user:
 
     # cd /usr/local/bin/holdings/fitness/gant
     # chmod +x gant
     # chown -R andy:users *

     a. You may need some additional files if it doesn't work initially, but it worked just fine under openSUSE 13.1.

3. Set the watch to pairing and listening mode as discussed at
http://community.linuxmint.com/tutorial/view/818:
Turn on pairing:
     Settings → ANT+ → Computer → Pairing: On

Tell it to retransmit recent data:
     Settings → ANT+ → Computer → Force Send: Yes
And enable pairing:
     Settings → ANT+ → Computer → Enabled: Yes

4. Plug the ANT+ USB dongle into a USB port on your computer. The ANT+ USB dongle will be recognized by the operating system and assigned to device ttyUSB0. (You can run dmesg to see this result.)

5. In a terminal session as root, change to the directory where you have installed gant, then pair the watch to the computer:
     # cd /usr/local/bin/holdings/fitness/gant
     # ./gant -f garmin -a auth405
     The -f switch assigns the name garmin to the pairing agreement. The auth405 file is the secret key used to pair with the 405. The program has to be run as root to be able to access the ttyUSB0 device.
     The watch will ask if you want it to pair. Press the upper right (Enter) button to say yes. This pairs the watch to gant, giving the watch the name garmin and writing the pairing key to auth405 in the current directory.

6. As root download the latest activities:
     # ./gant -nza auth405 >output.txt
     Note: the routine always writes output.txt as a session log; there will not be any other output unless you have actual training data on the watch.
     If you do, the routine presents each activity as a YYYY-MM-DD-HHMMSS.TCX file.

7. Now, what to do with it?

https://sites.google.com/site/garmin405linux/ details how to upload the output.txt to Garmin Connect.

http://developer.garmin.com/schemas/tcx/v2/ discusses the TCX format.

https://forums.garmin.com/showthread.php?22222-Convert-Garmin-XML-(TCX)-to-CSV-and-analyze-with-Excel

     presents a routine to convert TCX to a comma-separated-values (.CSV) format that can be read into a spreadsheet. But it is written in Perl for Windows and throws an error at line 65. I've checked the Perl dependencies and so forth, but nothing jumps out to resolve the issue, so: too hard for now.

On the other hand,
     http://www.teambikeolympo.it/TCXConverter/TeamBikeOlympo_-_TCX_Converter/TCX_Converter_ENG.html 

(mind the line break) offers a very smooth app to read TCX files.

     It has a comprehensive manual at http://www.teambikeolympo.it/TCXConverter/TeamBikeOlympo_-_TCX_Converter/DOWNLOADS_files/TCX-Converter-UserManual_209.zip

     The only glitch I see is that it only recognizes .tcx files if the suffix is lower case (.tcx not .TCX). 


     But otherwise, quite robust. For example, it gives the following for each track:
     Time     Lat     Long     Altitude     Distance     Heart Rate (BPM)     Cadence

     It also gives a host of export formats, including CSV.

Yet another step away from Windows. I'll be golden if I can find a way to graph heart rate and show tracks on a map.