23 September, 2014

Modern indentation for old LaTeX document classes

In my last papers I always removed the indentation of the first paragraph of each section by hand. This indentation style is considered more modern and more aesthetic than indenting every paragraph regardless of where it sits in the document.

Many LaTeX book/article/conference templates, especially those from the last century ;-), still use old-school indentation. And since I am finally starting to put my PhD thesis work into a real thesis document, I was looking for a way to automate the cumbersome manual work I had done previously.

Here is my current solution (based on this TeX answer):

\usepackage{etoolbox}
\makeatletter
\patchcmd{\@startsection}
  {\@afterindenttrue}
  {\@afterindentfalse}
  {}{}
\patchcmd{\chapter}
  {\@afterindenttrue}
  {\@afterindentfalse}
  {}{}
\makeatother

This works nicely for the AMS book template. If you use LyX, as I do, just add this to your LaTeX preamble in the document settings.


17 September, 2014

Pushing uncompressed and gzip-compressed files into awk

I was just wondering how to best handle compressed and uncompressed files together in my CSV-processing bash/awk scripts. Here is a very simple solution:

#!/usr/bin/env bash

file=my-data.csv.gz

# Print the given file to stdout, decompressing it first if it is gzipped.
kat() {
  case $1 in
    *.gz) gzip -dc "$1";;
    *)    cat      "$1";;
  esac
}

kat "$file" | awk '{ print }'   # do something with each line

I would say this is really lean code. Even though bash has its limitations and I despised it in the past, I have come to love it over the past few years.
 
PS: I know there are more sophisticated tools, such as `file` and `zcat`. However, they are not available in the default GitBash on Windows, which I am still using at work for many tasks.
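
On that note, kat extends to a whole list of mixed compressed and uncompressed files with one small change. This is just a sketch on my part, not part of the original script, and the file names below are made up, but it behaves much like a poor man's zcat over the whole list:

# Sketch: accept any number of files, decompressing the gzipped ones on the fly.
kat() {
  local f
  for f in "$@"; do
    case $f in
      *.gz) gzip -dc "$f";;
      *)    cat      "$f";;
    esac
  done
}

# Hypothetical file names, just to illustrate the call:
kat data-2013.csv.gz data-2014.csv | awk -F, '{ sum += $2 } END { print sum }'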


12 February, 2014

Using Gimp Images (xcf files) directly inside LyX

Long ago, I got tired of manually converting images to PDF for papers etc., and I have been using SVGs directly inside LyX since then, which works quite nicely. For my current work I also need some pixel images, which I composed in Gimp. Gimp saves XCF files, which contain all the layer information, etc.

Unfortunately, LyX (2.0.7) cannot fully handle these images out of the box. The preview is fine, but when generating PDFs LyX/LaTeX fails. To fix this you need to add the following settings in LyX:

1. Preferences -> File Handling -> File Formats -> [New]:
Format: XCF
Short Name: Gimp Image
Extension: xcf

2. Preferences -> Converters:
From: XCF, To: PNG
Converter: convert -layers merge $$i $$o

This will flatten the file, so make sure your current layer arrangement, etc. looks like your desired final output. Note that there is no need for an additional XCF to PDF setting, since the internal PNG to PDF converter will take care of that.
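
If you want to sanity-check the conversion outside LyX first, you can run the same command by hand, with $$i and $$o filled in. This assumes ImageMagick's convert is on your PATH; figure.xcf and figure.png are just placeholder names:

# The same conversion LyX will run, executed manually (placeholder file names):
convert -layers merge figure.xcf figure.png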

Cheers,
Juve


23 October, 2013

Adding JSON quotes with Vim

I just had some bad JSON and quickly replaced the bad text with good text using my magic Vim editor.

The bad code looked like:
{
    prop: "value"
}

But JSON requires:
{
    "prop": "value"
}

Here is the command I used in Vim:

%s#\([^"\t ][a-zA-Z0-9_-]*\):\([\t ]\+\)#"\1":\2#gc

Ciao,
Juve

12 April, 2013

A project file for CoffeeScript projects

As you may know, I program a lot of UI stuff in CoffeeScript. To avoid recompiling by hand, I usually tell coffee to watch for changes and recompile automatically: coffee -w -o ./lib -c ./src
In addition, I often start the coffee console to play around with some code interactively, e.g., to test the results of magic list and comprehension processing, etc. And I might also start a small static node.js webserver to serve files via HTTP rather than using file:// links.
Since this is a lot of stuff to start before I can even start programming, I usually write little project files (shell scripts) that will do all this automatically.

Here is a simple example that works for me in GitBash/Windows and in Linux (RedHat):
  1. It starts and detaches coffee -w and tracks its process id and group id
  2. Starts a node.js HTTP server and tracks its process id and group id
  3. Then starts the coffee console and waits for it to quit
  4. Finally it brings down all the started programs and exits

#!/usr/bin/env bash

# Kill the whole process group for the given group id (after a short delay),
# so that any child processes are cleaned up as well.
killGroup() {
    if [ -z "$1" ]; then
        echo "watcher gpid not set"
    else
        echo "killing gpid:$1"
        (sleep 1 && kill -- -$1)&
    fi
}
# Look up the process group id for the given pid. Falls back to `ps -l`
# if the first ps invocation is not supported.
getGID() {
    ps='ps -eo pid,ppid,pgrp'
    $ps 1> /dev/null 2>&1 || ps='ps -l'
    $ps | awk "{ if (\$1 == $1) { print \$3 }}"
}

cwd=$(pwd)
pdir=$(dirname $0)

coffee -o $pdir/lib -wc $pdir/src&
watch=$!
gpid=$(getGID $watch)
echo "starting coffee watcher (pid: $watch, $gpid)..."

sleep 2

cd $pdir/..
coffee server.coffee&
server=$!
sgpid=$(getGID $server)
echo "starting coffee server (pid: $server, $gpid)..."

sleep 2
echo "starting coffee console..."
coffee

killGroup $gpid
killGroup $sgpid

cd $cwd
exit 0


After starting it looks like this:
$ sh project.sh
starting coffee watcher (pid: 10868, 17160)...
07:57:57 - compiled src\main.coffee
07:57:57 - compiled src\charts.coffee
07:57:57 - compiled src\datagen.coffee
07:57:57 - compiled src\algorithms.coffee
starting coffee console...
coffee>
The compiler output gets mixed into the coffee console output, but that is just fine, since this way I do not have to maintain several console windows. Here is an example where I tested something on the console and then saved my main file.
coffee> a = [1,2,3]; a.map (d) -> value:d
[ { value: 1 }, { value: 2 }, { value: 3 } ]
coffee> 08:40:06 - compiled src\main.coffee
08:40:16 - compiled src\main.coffee
In a past post I said that bash syntax was awkward; I really have to retract that statement now. Bash is really great for such tasks.

Cheers, Juve

15 March, 2013

Bibify all my PDFs

This week I wrote a nice little script that scans all my PDF files and spits out a magic BibTeX file with their metadata. Each entry in the bib file provides a rough description of the PDF and two useful links:
  1. a Google search link for the paper and
  2. a local file link to the actual PDF.

This allows me to create quick and dirty draft papers and presentations with clickable references, without having to manually maintain any bib files or use any document management tool.

Check out my GitHub page for the tool to read more!

Why did I write it?

I do not like big tools that enforce a working pattern on my daily research. I have tried some document management tools before but was never happy with them.

I only use Freeplane to manage all my knowledge. This includes managing papers as well. Mind maps are just nodes and edges and everything can be easily restructured. That is very important to me.

When I read a paper I add a new node to my "Papers.mm" mind map, give it a short title, and then summarize what I found inside and what the paper "can do for me". I also link the node to the PDF.

But now and then I need to discuss my findings with others and write paper drafts and slide-ware. Since I use LaTeX/LyX for these tasks, I need to create BibTeX files for my documents. This can be a lot of manual work, especially if you throw out many of the discussed references later on once the content matures. I also have to dig out the referenced papers during the discussions.

Thus I realized that I need
  1. An automatic bib file for all papers in consideration
  2. Clickable links in my own papers and slides
And that is what the tool does: it crunches all PDFs, extracts some words as a "title", and adds some "href" links in the BibTeX "note" attribute. It is quick and dirty, but it works as expected. In any case, for a final paper I would eventually hand-craft a custom bib file anyway.
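
To give a rough idea of the approach (this is only a sketch, not the actual tool; it assumes pdftotext from poppler-utils is installed and uses made-up paths):

#!/usr/bin/env bash
# Sketch: for every PDF, grab a few words from the first page as a makeshift
# "title" and emit a BibTeX entry whose note field carries a Google search
# link and a local file link. All paths and names here are illustrative.
for pdf in ~/papers/*.pdf; do
  key=$(basename "$pdf" .pdf)
  title=$(pdftotext -l 1 "$pdf" - 2>/dev/null | tr -s '[:space:]' ' ' | cut -c1-80)
  query=$(echo "$title" | tr ' ' '+')
cat <<EOF
@misc{$key,
  title = {$title},
  note  = {\href{https://www.google.com/search?q=$query}{Google search}, \href{file://$pdf}{local PDF}}
}
EOF
done

The \href links in the note field assume that the consuming document loads hyperref.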

PS: The script also works in Windows via the bash provided by Git.

22 February, 2013

Caring for Good English Pronunciation

Today I had a conversation about the importance of well-pronounced English and why (or whether) many Germans do not care about it. OK, I may only speak for my fellow IT guys, but here are some conclusions:
  • Some really don't care.
  • Many are really busy doing business and don't have time to nurse their language skills.
  • Many just do not know better.
  • People who know better do not point out the others' mistakes, at least not often enough, since that might be seen as impolite.
  • Some mispronunciations are so common that people feel validated when they hear and make the same mistake over and over again.
Here is a collection of words containing the most common verbal attacks I have endured over the last five years. I tried to "write" down how to "speak" some of them for my German fellows. If in doubt, just click the word and listen to one of the speakers at dict.cc, or search the web.

Words of Pain
I will add more words here when my torturers start speaking again :-) and then point them to this article to help them improve their skills. In case I missed some commonly mispronounced words, just drop me a comment so that I can add them.

Best Practice: Listen carefully, speak out loudly, repeat, repeat, repeat!

Cheers,
Uwe