
12 February, 2014

Using Gimp Images (xcf files) directly inside LyX

Long ago, I got tired of manually converting images to PDF for papers etc., and I have been using SVGs directly inside LyX ever since, which works quite nicely. For my current work I also need some pixel images and composed them in Gimp. Gimp saves its images as XCF files that retain all layer information.

Unfortunately, LyX (2.0.7) cannot fully handle these images out of the box. The preview is fine, but when generating PDFs LyX/LaTeX fails. To fix this, you need to add the following settings in LyX:

1. Preferences -> File Handling -> File Formats -> [New] ->
Format: XCF
Short Name: Gimp Image
Extension: xcf

2. Preferences-> Converters:
From: XCF, To: PNG
Converter: convert -layers merge $$i $$o

This will flatten the file, so make sure your current layer arrangement looks like your desired final output. Note that there is no need for an additional XCF-to-PDF setting, since the internal PNG-to-PDF converter will be used for that.
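To sanity-check the conversion outside of LyX, you can run the same ImageMagick command by hand (the file names below are placeholders, and ImageMagick must be installed):

```shell
# The same conversion LyX will invoke, run manually (ImageMagick's convert).
# image.xcf / image.png are placeholder file names.
convert -layers merge image.xcf image.png
```

If the resulting PNG looks right, the LyX converter entry will produce the same output.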

Cheers,
Juve


23 October, 2013

Adding JSON quotes with Vim

I just had some bad JSON and quickly replaced the bad text with good text using my magic Vim editor:

The bad code looked like:
{
    prop: "value"
}

But JSON requires:
{
    "prop": "value"
}

Here is the command I used in Vim:

%s#\([^"\t ][a-zA-Z0-9_-]*\):\([\t ]\+\)#"\1":\2#gc
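If you prefer to fix such files non-interactively, a roughly equivalent substitution can be done from the shell; this is a sketch with GNU sed (the character classes are slightly simplified compared to the Vim pattern, and `fixjson` is just a throwaway helper name):

```shell
# Quote bare property names followed by a colon (GNU sed; simplified pattern).
fixjson() {
    sed -E 's/([^" \t][A-Za-z0-9_-]*):([ \t]+)/"\1":\2/g'
}

# Demo: pipe the bad JSON through the filter.
printf '{\n    prop: "value"\n}\n' | fixjson
```

Unlike the Vim `c` flag, this replaces everything without confirmation, so check the result.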

Ciao,
Juve

12 April, 2013

A project file for CoffeeScript projects

As you may know, I program a lot of UI stuff in CoffeeScript. To avoid manual recompilation, I usually tell coffee to watch for changes and recompile automatically: coffee -w -o ./lib -c ./src
In addition, I often start the coffee console to play around with some code interactively, e.g., to test the results of magic list and comprehension processing, etc. And I might also start a small static node.js webserver to serve files via HTTP rather than using file:// links.
Since this is a lot of stuff to start before I can even start programming, I usually write little project files (shell scripts) that will do all this automatically.

Here is a simple example that works for me in GitBash/Windows and in Linux (RedHat):
  1. It starts and detaches coffee -w, tracking the process id and group id
  2. Starts a node.js HTTP server, tracking the process id and group id
  3. Then starts the coffee console and waits for it to quit
  4. Finally, it brings down all started programs and exits

#!/usr/bin/env bash

killGroup() {
    if [ -z "$1" ]; then
        echo "watcher gpid not set"
    else
        echo "killing gpid:$1"
        (sleep 1 && kill -- -$1)&
    fi
}
getGID() {
    ps='ps -eo pid,ppid,pgrp'
    $ps 1> /dev/null 2>&1 || ps='ps -l'
    $ps | awk "{ if (\$1 == $1) { print \$3 }}"
}

cwd=$(pwd)
pdir=$(dirname $0)

coffee -o $pdir/lib -wc $pdir/src&
watch=$!
gpid=$(getGID $watch)
echo "starting coffee watcher (pid: $watch, $gpid)..."

sleep 2

cd $pdir/..
coffee server.coffee&
server=$!
sgpid=$(getGID $server)
echo "starting coffee server (pid: $server, $sgpid)..."

sleep 2
echo "starting coffee console..."
coffee

killGroup $gpid
killGroup $sgpid

cd $cwd
exit 0


After starting it looks like this:
$ sh project.sh
starting coffee watcher (pid: 10868, 17160)...
07:57:57 - compiled src\main.coffee
07:57:57 - compiled src\charts.coffee
07:57:57 - compiled src\datagen.coffee
07:57:57 - compiled src\algorithms.coffee
starting coffee console...
coffee>
The compiler output will be mixed into the coffee console output but that is just fine, since I do not have to maintain several console windows this way. Here is an example where I tested something on the console and then saved my main file.
coffee> a = [1,2,3]; a.map (d) -> value:d
[ { value: 1 }, { value: 2 }, { value: 3 } ]
coffee> 08:40:06 - compiled src\main.coffee
08:40:16 - compiled src\main.coffee
In a past post I said that bash syntax was awkward; I really have to revoke that statement now. Bash is really great for such tasks.
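As a small illustration, the getGID helper from the script above can also be exercised stand-alone (the sleep job here is just a stand-in for the coffee watcher):

```shell
#!/usr/bin/env bash
# Stand-alone check of the getGID trick used in the project file:
# look up the process-group id of a given pid from the ps listing.
getGID() {
    ps='ps -eo pid,ppid,pgrp'
    $ps 1> /dev/null 2>&1 || ps='ps -l'    # fall back if -eo is unsupported
    $ps | awk -v pid="$1" '$1 == pid { print $3 }'
}

sleep 2 &                                  # a throwaway background job
echo "group id of $!: $(getGID $!)"
```

Killing `-- -<gid>` then takes down the whole group, including any children the watcher spawned.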

Cheers, Juve

15 March, 2013

Bibify all my PDFs

This week I wrote a nice little script that scans all my PDF files and spits out a magic bibtex file with their meta-data. Each entry in the bib file will provide a rough description of the PDF and two useful links:
  1. a google search link for the paper and 
  2. a local file link to the actual PDF.

This allows me to create quick and dirty draft papers and presentations with clickable references, without the need to manually maintain any bib files or to use any document-management tool.

Check out my GitHub page for the tool to read more!

Why did I write it?

I do not like big tools that enforce some working pattern on my daily research. I tried some document-management tools before but was never happy.

I only use Freeplane to manage all my knowledge. This includes managing papers as well. Mind maps are just nodes and edges and everything can be easily restructured. That is very important to me.

When I read a paper I add a new node in my "Papers.mm" mind map, give it a short title, and then summarize what I found inside and what the paper "can do for me". I also link the node to the PDF.

But every now and then I need to discuss my findings with others and write paper drafts and slide-ware. Since I use LaTeX/LyX for these tasks, I need to create BibTeX files for my documents. This can be a lot of manual work, especially if you throw out many of the discussed references later on when the content matures. I also have to dig out the referenced papers during the discussions.

Thus I realized that I need
  1. An automatic bib file for all papers in consideration
  2. Clickable links in my own papers and slides
And that is what the tool does: it crunches all PDFs to extract some words as a "title" and adds "href" links in the BibTeX "note" attribute. It is quick and dirty, but it works as expected. Either way, for a final paper I would eventually hand-craft a custom bib file anyway.
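To give a flavor of the idea, here is a hypothetical sketch, not the actual tool (see GitHub for that); it emits one rough BibTeX entry per PDF, using the file name as a stand-in for the extracted title:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the idea, not the real tool: for each PDF in a
# directory, emit a rough BibTeX entry with a Google search link and a
# local file link in the note field.
bibify() {
    local f base key
    for f in "$1"/*.pdf; do
        [ -e "$f" ] || continue
        base=$(basename "$f" .pdf)
        key=$(echo "$base" | tr -cd 'A-Za-z0-9')   # bib keys: alphanumeric only
        printf '@misc{%s,\n' "$key"
        printf '  title = {%s},\n' "$base"
        printf '  note  = {\\url{https://www.google.com/search?q=%s} \\url{file://%s}}\n}\n' \
            "$base" "$(readlink -f "$f")"
    done
}
```

Usage would be something like `bibify ~/papers > papers.bib`; the real tool additionally extracts words from inside each PDF.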

PS: The script also works in Windows via the bash provided by Git.

22 February, 2013

Caring for Good English Pronunciation

Today I had a conversation about the importance of well-pronounced English and why/if many Germans do not care about it. OK, I may only speak for my fellow IT guys, but here are some conclusions:
  • Some really don't care.
  • Many are really busy doing business and don't have time to nurse their language skills.
  • Many just do not know better.
  • People who know better don't point out the others' mistakes, at least not often enough, since that might be seen as impolite.
  • Some mispronunciations are so common that people feel validated when they hear and make the same mistake over and over again.
Here is a collection of words containing the most common verbal attacks I endured over the last 5 years. I tried to "write" down how to "speak" some of them for my German fellows. If in doubt, just click a word and listen to one of the speakers at dict.cc, or search the web.

Words of Pain
I will add more words here when my torturers start speaking again :-) and then point them to this article to help them improve their skills. In case I missed some commonly mispronounced words, just drop me a comment so I can add them.

Best Practice: Listen carefully, speak out loudly, repeat, repeat, repeat!

Cheers,
Uwe

14 February, 2013

I Love Free Software

I love Free Software!

There is an FSFE campaign to express your love for free software. Let me use that opportunity to tell you what software I love and use every day.

First of all, there's a piece of software built upon GNU/Linux and Debian called Ubuntu. Ubuntu is the only operating system I have installed on my PC and I do not need anything else. No dual-booting with Windows. My old, useless Windows XP installation on my second SSD has long since made room for pictures and videos of my family and my little baby boy.

From my shiny Linux PC I regularly start a variety of free, open-source programs.

  • Firefox - I guess you know that one; I am writing these lines from it right now. I love it and it gets better every day.
  • Inkscape - Helps me with ANY "arrows and boxes" and (scientific) line-drawing tasks, especially for my job. I use it instead of PowerPoint, since it is much easier to use and much more powerful. It never gets in my way and never tries to be smarter than me, which I think is critical for creativity software. At home I also use it to design greeting cards or stitching templates for my mother.
  • Gimp - A powerful image manipulation tool that helps me retouch photos and create art, such as the following Undying retouch, Supernova image, and fine Pixel Art:


    PS: Gimp 2.8 just got better with single window mode!
  • LyX/LaTeX - If you ever consider writing a scientific document (paper, thesis, etc.), use LyX! LyX builds on LaTeX, which can make your document look very professional, and LyX frees you from all the technical details that you need to consider when using plain LaTeX.
  • Vim - One of the most advanced text editors out there! I use it for writing sophisticated programs and websites in JavaScript, CoffeeScript, and shell script. Beware! Vim is not for the faint of heart. You first have to throw away your old, well-known text-editing metaphors and learn the Vim way from scratch. Do so and you will love it.
  • Eclipse - This is a really big one. Even though I prefer to write code in Vim, there are some cases where more integrated tooling can be useful. I use it mainly for writing programs for the Java VM.
  • Wine - This is a two-edged sword. On the one hand, Wine is free, open-source software that I have been using since the day I started to use Linux in 2006. On the other hand, Wine's only purpose is to run mostly non-free, Windows-based software under Unix/Linux/MacOS. I use it mainly for gaming. It helps me run very old Windows games, such as Fallout 1, that may not even run out of the box on Windows itself anymore. I also use it to run very recent games, such as Diablo III, Drakensang II, or Torchlight.
I guess these are the most important free, open-source programs that I use very frequently. Here is a list of additional software that I also love and that I do not want to miss.

  • Gnome - My desktop of choice. Yes, in GNU/Linux you are free to choose from a variety of desktops.
  • Meld - A really nice tool to compare files.
  • Git - A modern, lightweight version control system. I also use it to back up my data to/from file shares and USB drives.
  • SumatraPDF - Very fast and lightweight PDF viewer. Much faster than the bloated Adobe Reader.
  • Mawk, Screen, and similar Core Utils - Perfect tiny tools that ease my everyday work in the office.
  • Freemind/Freeplane - Mindmapping tools that help me organize my ideas and thoughts.
  • Stellarium - A nice, enjoyable night-sky and universe explorer
  • Andor's Trail - A very addictive, open-source RPG for Android
  • Battle for Wesnoth - Awesome, extensive, community-driven turn-based strategy game
If you have any questions regarding one of these open-source programs/projects and how I use them or how you can benefit from them, just drop me a note or leave a comment here in the blog.

That's it about free, open-source software that I love nearly as much as I love my wife, my little baby boy, and my cats. ;)

Best wishes,
Uwe Jugel

29 October, 2012

Making regular expressions more readable in shell scripts

Today I was wondering why using sed in shell scripts feels so awkward.
I wanted to fix that and posted a question on stackoverflow.

First improvement: You can use custom separators to avoid escaping /

With that,
sed 's/\/\*\s*APPEND_NAME.*\s*\*\/\(.*\)\/\*\s*.*\s*\*\//\1'$value'/g'
becomes
sed 's:/\*\s*APPEND_NAME.*\s*\*/\(.*\)/\*\s*.*\s*\*/:\1'$value':g'

A bit more readable, but I still was not happy and wanted to go a little further. Here is my solution for replacing several kinds of comments/annotations in a template file. The file looks like this (excerpt):
/*!ONCE*/IMPORT 'schemas.ccl';
/*!ONCE*/CREATE INPUT WINDOW SimpleTick
/*!ONCE*/SCHEMA TickSchema PRIMARY KEY (t_key);

CREATE WINDOW    /*APPEND_NAME     */TickFilter/* */ PRIMARY KEY DEDUCED
KEEP             /*WINDOW_FULLSIZE */11000     /* */ MILLISECONDS AS
SELECT * FROM SimpleTick st
I want to replace the /*COMMENTS*/ with real values, which is actually quite easy using cat, grep, and sed. But the many sed expressions needed for that task would still contain many escaped characters, such as \*, \s, and \(, \), even when using colons instead of slashes as expression separators.

Second improvement: Outsource all redundancy!
By writing common sub-expressions as variables or even functions, and composing the full expressions from them, you get much more readable sed statements. Here is my full sed/grep readability approach:
CS='/\*\s*'
CE='\s*\*/'
REF='\(.*\)'

wrapex(){
    echo "$CS$*.*$CE"
}

replaceVariable(){
    local name=$1
    local value=$2
    sed "s:$(wrapex $name):$value:g"
}
replaceString(){
    local name=$1
    local value=$2
    sed "s:$(wrapex $name):'$value':g"
}

appendVariable(){
    local name=$1
    local value=$2
    sed "s:$(wrapex $name)$REF$(wrapex):\1$value:g"
}

applyTemplateHeader(){
    egrep "!ONCE" | sed "s:$(wrapex "!ONCE")$REF:\1:g"
}

removeComments(){
    sed '\:'$CS'#:d' | sed '\:#'$CE':d' | sed '\:\s*\*#:d'
}
In the end, using such sed helper functions is quite easy and the code stays readable:
replaceVariable WINDOW_FULLSIZE 100
appendVariable  APPEND_NAME     "_A"
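Putting it all together, here is a self-contained sketch of such a template pass (helper definitions repeated from above so it runs stand-alone; GNU sed is assumed for \s, and the file names in the usage note are hypothetical):

```shell
#!/usr/bin/env bash
# Helpers repeated from above so this sketch runs stand-alone (GNU sed's \s).
CS='/\*\s*'
CE='\s*\*/'
REF='\(.*\)'
wrapex(){ echo "$CS$*.*$CE"; }
replaceVariable(){ sed "s:$(wrapex $1):$2:g"; }
appendVariable(){ sed "s:$(wrapex $1)$REF$(wrapex):\1$2:g"; }

# One full template pass: stdin -> stdout.
applyTemplate(){
    replaceVariable WINDOW_FULLSIZE 100 | appendVariable APPEND_NAME "_A"
}
```

A run over the template would then be `applyTemplate < template.ccl > output.ccl`.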

Cheers,
Juve