After quite a few hours in GDB over the past weeks, it started to get boring to print *res->value.list->… etc. over and over to see what weird value was harassing my tests. I had heard that GDB gained Python support some time ago, so I decided to write some useful tools for debugging XMMS2-related code.

This is what it now looks like when you print an xmmsv_coll_t:

(gdb) print *coll
$11 = {
  "attributes": {
    "type": "id"
  }, 
  "type": "XMMS_COLLECTION_TYPE_ORDER", 
  "idlist": [], 
  "operands": [
    {
      "attributes": {}, 
      "type": "XMMS_COLLECTION_TYPE_UNIVERSE", 
      "idlist": [], 
      "operands": []
    }
  ]
}

…and a regular xmmsv_t:

(gdb) print *fetch
$15 = {
  "type": "cluster-list", 
  "data": {
    "type": "organize", 
    "data": {
      "id": {
        "aggregate": "first", 
        "type": "metadata", 
        "get": [
          "id"
        ]
      }
    }
  }, 
  "cluster-by": "id"
}

The code is a bit under 100 lines of Python and should be a nice inspiration for people who still haven’t added this huge help to their projects. The code can be found here, and checked out via:

git clone git://git.xmms.se/xmms2/xmmsgdb.git
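
For reference, here is a minimal sketch of how such a pretty-printer hooks into GDB’s Python API. This is not the xmmsgdb code itself: the struct tag and field names below are assumptions, and a real printer would recurse through lists, dicts and the other value types.

import gdb
import json

class XmmsvPrinter(object):
    """Render a value as JSON instead of a raw struct dump."""
    def __init__(self, val):
        self.val = val

    def to_string(self):
        # A real printer would inspect the type tag and recurse into
        # the union of list/dict/int/string members; this only shows
        # the skeleton that GDB calls into.
        return json.dumps({"type": int(self.val["type"])}, indent=2)

def lookup_xmmsv(val):
    # Strip typedefs, then match on the underlying struct tag.
    t = val.type.strip_typedefs()
    if t.tag == "xmmsv_St":  # assumed struct tag
        return XmmsvPrinter(val)
    return None

gdb.pretty_printers.append(lookup_xmmsv)

Loading the script with ”source” inside GDB 7.0 or later is enough; print *coll is then routed through to_string().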

As this has always been a bit too far behind the scenes, I wanted to take some time to describe what measures have been taken to increase the quality of XMMS2, and what the future has in store.

Today we have a basic unit test framework built on top of libcunit. To reduce boilerplate code in the actual test suites, a number of macros have been defined. Here is an example of the basic structure of a simple test suite:

SETUP (mytestsuite) {
  /* set up whatever is needed
   * for each test case to run
   */
}

CLEANUP () {
  /* clean up so that the state
   * is restored to what it was before
   * SETUP (mytestsuite) was run.
   */
}

CASE (mytestcase1) {
  /* the actual test */
}

CASE (mytestcase2) {
  ...
}

To guarantee correctness, SETUP is executed before each CASE is run, and CLEANUP is executed after each CASE has finished. Additionally, the whole SETUP, CASE, CLEANUP sequence is wrapped, both before and after, by the following checks:

VALGRIND_DO_LEAK_CHECK;
VALGRIND_COUNT_LEAKS(leak, d, r, s);

This imposes no runtime dependency, but injects markers such that if the tests are executed under Valgrind, each test case is inspected for memory leaks independently, and a test case fails if a leak is found.

That covers writing test cases and validating their resource management; next up is getting a clear view of what has been tested, and this is where coverage reports come into play. To get coverage reporting, via gcov, the --coverage flag is appended to both CFLAGS and LINKFLAGS in the build system. When running the tests, a heap of .gcda and .gcno files will be emitted which, among other things, contain metadata about what lines were executed. To produce something that’s easy to inspect, lcov and genhtml process these files into a heap of HTML files using the following commands:

lcov -c -b $base-directory -d $metadata-directory -o coverage.info
genhtml -o cov coverage.info

The $base-directory in this case is used to resolve relative paths, as our build system outputs its artifacts in a sibling directory of the source directory. So, for example, the source files will be referred to as ”../src/xmms/medialib.c”, where ”..” is relative to ”_build_”. The $metadata-directory is the directory to recursively scan for .gcda files. See the man page for further details.

So we now know that our tests produce the correct results, that they don’t leak, and we’ve verified via coverage that they exercise the complex code paths we want them to. This already gives us a lot of confidence that what we’re doing is correct, but there’s one more tool we can use to increase that confidence: the beautiful, free static analyzer from the Clang project. To perform a static analysis of the XMMS2 source code, simply issue the following commands:

scan-build ./waf configure
scan-build ./waf build

After a while the analysis is done and you will be presented with a command that opens your browser with the static analysis report. This is the latest addition to our tool chain and will help us increase our code quality even further; there are still some warnings of various severities left to fix.

Now on to the future. While working on getting Collections 2.0 into shape, I started looking for a comfortable way of validating its correctness while getting to know the whole concept and the code behind it, so that I could easily modify its structure without breaking things.

The first step was to build the data structures via the C API, like clients would, with some basic validation of the result. This turned out to be pretty verbose, as whole data structures had to be written out in code instead of generated from some kind of user interface. The next step was to write a small JSON parser that constructed an xmmsv_t, which could be used to build the fetch specification, so that by looking at a test for a second you’d know exactly what the result would be. After this, the obvious step was to also construct an xmmsv_t with the expected result from JSON. Here a vision of an entirely code-free test suite started to grow, and some lines of code later an xmmsv_coll_t could also be constructed from JSON.

The envisioned test-runner is not committed yet, but what it does is scan a directory structure like this:

testcases/test_query_infos_order_by_tracknr/medialib.json
testcases/test_query_infos_order_by_tracknr/collection.json
testcases/test_query_infos_order_by_tracknr/query.json
testcases/test_query_infos_order_by_tracknr/expected.json
testcases/test_something_complex/medialib.json
testcases/test_something_complex/collection.json
testcases/test_something_complex/query.json
testcases/test_something_complex/expected.json

And for each directory under ”testcases” it performs the same task as the current test framework does, but now in a way that makes it easy for non-C-coders to contribute new test cases. A minimal sketch of such a runner is shown below.
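
This is roughly what such a runner could look like; execute_query() is a hypothetical stand-in for the real harness that populates the media library, builds the xmmsv_coll_t and runs the query:

import json
import os

def load(directory, name):
    with open(os.path.join(directory, name)) as fd:
        return json.load(fd)

def execute_query(medialib, collection, query):
    # Hypothetical: stands in for the real machinery that populates a
    # media library from 'medialib', builds an xmmsv_coll_t from
    # 'collection' and runs 'query' against it.
    raise NotImplementedError

def run_case(directory):
    medialib = load(directory, "medialib.json")
    collection = load(directory, "collection.json")
    query = load(directory, "query.json")
    expected = load(directory, "expected.json")
    # Diff the actual result against the expected one.
    return execute_query(medialib, collection, query) == expected

for case in sorted(os.listdir("testcases")):
    path = os.path.join("testcases", case)
    print("%s: %s" % (case, "OK" if run_case(path) else "FAIL"))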

A bonus here is that it’s easy to re-use this collection of test cases for other types of tests, such as performance tests, which actually already work. When running the suite in performance mode, another directory is scanned for media libraries of different sizes (500, 1000, 2000, 4000, 8000, 16000 songs), each test is executed against each library, and per-test performance metrics are dumped on stdout.

The idea is that these performance tests will emit data in a format that can be used to produce nice graphs based on different metrics. The script that produces the graphs would take a number of test-runs as input, so that you could easily compare multiple versions of the code to check for performance improvements and regressions.

So that’s it, folks. If you have recommendations for further improvements, don’t hesitate to join the IRC channel for a chat, or perhaps drop me a mail.

Two years ago we started our journey to write what would become enterprise server software in the Python language. Over time we’ve done some pretty nutty things that wouldn’t have been needed if the Python VM weren’t crap. The reason we started with Python was a constraint on how to communicate with a core component in the environment. In hindsight we probably should have written our own library from the start (we have done so today), but it was also an interesting ride.

Like everyone else, we noticed that Python becomes slower and slower for each thread you add, especially on SMP systems, thanks to the glorious Global Interpreter Lock. With the help of python-multiprocessing we were later able to take advantage of the 8 cores available to us, at the cost of copying a lot of data between processes (5-60 processes depending on configuration) and consuming a heap of RAM (16-24GB was not uncommon). To reduce the work of using multiprocessing, python-orb was created (it could do with a bit more polish, but it suits our needs).
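
The pattern is roughly the one below; a minimal sketch rather than our actual code, but it shows where the copying comes from, since arguments and results are pickled across the process boundary:

import multiprocessing

def crunch(chunk):
    # Stand-in for CPU-bound work that would serialize on the GIL
    # if it ran in threads instead of processes.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = [list(range(100000)) for _ in range(8)]
    pool = multiprocessing.Pool(processes=8)
    # Each chunk is pickled and copied to a worker, and each result is
    # copied back; this is the RAM and copying cost mentioned above.
    print(sum(pool.map(crunch, data)))
    pool.close()
    pool.join()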

Later on we noticed that our software pretty much crawled to a halt at regular intervals. Eventually we began to suspect the Python garbage collector, and some investigation confirmed that it was the culprit. We decided to simply skip the garbage collector altogether, as it only helps when you have circular references in your application (Python is otherwise reference counted), and those can fairly easily be avoided.

Python being a dynamic language means that you pretty much have to pay for the rapid development and compact syntax with twice as many test cases (yes, your application will start up with completely broken syntax and typos that don’t surface until it’s time to execute that particular line of code). This is not really that bad, as the tests too are rapidly developed, and you need tests anyway to prove that your software does what you want, even after a major refactoring.

Once we had found the problem, we simply disabled the garbage collector in our test framework and started logging the result of gc.collect() after each test method had run. In addition, we added support for running the garbage collector on demand in our software, so that we could run it for some hours with tons of data and then see if a gc.collect() returned something. Some days later we had nailed the last of the few cyclic references and were ready to run the whole application with the garbage collector disabled. The result was much better performance, and the end of stop-the-world garbage collections. Win!
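
In unittest terms, the trick looks roughly like this (a sketch of the approach, not our actual test framework):

import gc
import unittest

class GcCheckedTestCase(unittest.TestCase):
    def setUp(self):
        gc.disable()

    def tearDown(self):
        # gc.collect() returns the number of unreachable objects it
        # found; anything non-zero means the test created a cycle.
        cycles = gc.collect()
        self.assertEqual(cycles, 0,
                         "test leaked %d cyclic objects" % cycles)

class CycleExample(GcCheckedTestCase):
    def test_cycle(self):
        a = []
        a.append(a)  # a self-reference refcounting alone cannot free

if __name__ == "__main__":
    unittest.main()

Running this flags test_cycle, since the self-referencing list only becomes collectable garbage once the test method’s locals go away.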

The new version of our product relies on a much better virtual machine, namely the JVM. We do, however, still use Python a lot for non-performance-critical scripting, for analyzing data, and so on. During the last week I analyzed a lot of data to locate a bug, which involved loading up a blob of JSON data and juggling it around until something interesting popped up (and it did!). This is a prime example of what disabling the garbage collector can do for you on a daily basis, so here it comes:

> import cjson, time, gc

> def read_json_blob():
>   t0 = time.time()
>   fd = file("mytestfile")
>   data = fd.read()
>   fd.close()
>   t1 = time.time()
>   parsed = cjson.decode(data)
>   t2 = time.time()
>   print "read file in %.2fs, parsed json in %.2fs, total of %.2fs" % \
>                                                   (t1-t0, t2-t1, t2-t0)

> read_json_blob()
read file in 10.57s, parsed json in 531.10s, total of 541.67s

> gc.disable()
> read_json_blob()
read file in 0.59s, parsed json in 15.13s, total of 15.72s

> gc.collect()
0

Ok, so that’s 15 seconds instead of about 9 minutes until I’m able to start analyzing the data, and of course there was nothing for the garbage collector to collect afterwards. The slowdown comes from the collector repeatedly scanning the ever-growing heap of container objects the parser allocates, even though none of them are garbage yet. The file in question is a 1.2GB JSON text file, the disks perform at about 110MB/s sequential reads, and we have 8 Intel Xeon E5520 2.27GHz cores to use (only one core is used in this example).

I hope this saves someone else’s time as it has saved mine.

About one and a half years ago I got tired of using Trac and started looking for alternatives. There were (are?) a lot of issues with Trac, but one of the more visible usability problems is that you write filters in SQL. As I’m accustomed to filters in a fire-and-forget fashion from my years with the Mantis BTS, this doesn’t really work for me. The Almighty Google Machine led me to a heap of people recommending Redmine as a drop-in replacement, with nice import scripts. A couple of days later I’d created my first Redmine instance, and I haven’t looked back since.

We’ve also started using Redmine in my project at work, and now the other projects are getting jealous of our fancy setup, hence this post.

Pre-reqs: One piece of hardware with Debian Lenny installed.

First, add the Debian Backports APT repository to your sources.list:

# echo "deb http://www.backports.org/debian lenny-backports \
            main contrib non-free" >> /etc/apt/sources.list
# aptitude update
# aptitude install debian-backports-keyring
# aptitude update

Next up you’ll need an Apache module with a very fancy web page, Passenger:

# aptitude -t lenny-backports install \
                    libapache2-mod-passenger

You’re also going to need some database to store your crap in. I’m just going to base this on MySQL, as that’s the DB that was already running on the machines I run Redmine on, and there’s no specific reason why I picked version 5.1 here either:

# aptitude -t lenny-backports install mysql-server-5.1

During the installation you’ll be asked to enter a password for the root account on the MySQL database server. If you’re out of ideas I can really recommend installing the pwgen package which will happily generate a secure password for you:

# pwgen -sy | cat

Armed with a MySQL database and a secure password it’s now time to create the Redmine database:

# mysql -u root -p
mysql> create database redmine character set utf8;
Query OK, 1 row affected (0.00 sec)
mysql> create user 'redmine'@'localhost' identified by 'my_password';
Query OK, 0 rows affected (0.00 sec)
mysql> grant all privileges on redmine.* to 'redmine'@'localhost';
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye

…where you’d obviously use that fancy pwgen tool to generate yet another super secure password that you’ll forget before reading the rest of this text.

Armed with a database and a Ruby on Rails hungry Apache module you’re now ready to grab Redmine:

# cd /var/www
# wget http://rubyforge.org/frs/download.php/69449/redmine-0.9.3.tar.gz
# tar xvfz redmine-0.9.3.tar.gz

Now it’s time to remember that fancy password of yours:

# cd redmine-0.9.3
# cat <<EOF > config/database.yml
production:
  adapter: mysql
  database: redmine
  host: localhost
  username: redmine
  password: my-sikritt-passw0rd
EOF

Ok, so now Redmine is configured to access the database, but Rails is missing. Let’s grab it:

# gem install rails -v=2.3.5
# aptitude install libopenssl-ruby libmysql-ruby

Got Rails! Next up, prepare Redmine, and then populate the database:

# RAILS_ENV=production rake config/initializers/session_store.rb
# RAILS_ENV=production rake db:migrate
# RAILS_ENV=production rake redmine:load_default_data

The last step here will ask for the default language, select something you can understand.

Ok, we’re getting closer to actually running Redmine for the first time. The following steps will hook up Redmine to be run by Apache:

# chown -R www-data:www-data files log tmp public/plugin_assets
# mv public/.htaccess public/.disabled_htaccess
# cat <<EOF > /etc/apache2/sites-available/redmine
<VirtualHost _default_:80>
 ServerName your.domain.name
 DocumentRoot /var/www/redmine-0.9.3/public
 RailsEnv production
</VirtualHost>
EOF
# a2ensite redmine
# /etc/init.d/apache2 restart

When you direct your browser to http://your.domain.name, Redmine will present itself. You should of course make sure that the rest of your Apache installation works properly and that no strange directories are exposed to evil visitors, but otherwise you should be good to go. Enjoy!

The non-smart-phone world seems so distant now, after being connected to The Hive<tm> around the clock for a little more than a year with the HTC Dream/Android G1. It’s not the best of phones, but it was the first, and I can’t really say any other Android-based phone has impressed me much since. There is some hope for the rumored Motorola Shadow, but never mind; this post is about the applications I’ve grown to love.

I use a number of applications in my daily life, but some stand out more than others.

  1. ConnectBot
    This is hands down the best on-the-go SSH client I’ve ever used. It supports keys and multiple concurrent sessions, and it hooks up one of the hardware buttons to switch between windows in GNU Screen. Gestures scroll up/down in the buffer or send Page Up/Down, depending on whether you touch the left or right part of the screen. The trackball acts as Ctrl, which makes using a shell over a high-latency link a breeze. There are bookmarks, and you can even tunnel ports to the phone, which is really nice if you have some web page hidden inside some network. Simply put, pure awesomeness. It’s not uncommon that I start my work day on the bus with this application.
  2. Google Listen
    I never really cared about podcasts before, but this completely changed when I found this wonderful application. With a flat-rate data subscription, and podcasts being downloaded to the phone or streamed as you listen, this sweet application makes podcasts really accessible. The only annoying thing is that it keeps playing the next podcast in the queue with no way of stopping after just one, which causes me to wake up with strange voices in my head in the middle of the night. Another feature some iPhone fanboy friends of mine have in their podcast clients is the ability to increase playback speed, which would be very nice when listening to The Economist podcast. My current list of poison can be found here.
  3. Twidroid
    I wasn’t really into Twitter until I found this application. I haven’t tried many others, as I don’t feel limited by this one. It’s not mega awesome, but it’s well written and does its job well. It supports all the features you’d expect: it updates tweets in the background, supports URL-shortening and photo-sharing services, hooks into the Share feature in Android, etc.
  4. Google Sky Map
    Using the accelerometer to navigate and GPS to fetch your position, it presents you with a 3D map of the universe around you. As a typical Swede I could only ever spot the Big Dipper and perhaps Orion’s Belt, so for me this app is a big +1. One dark night last summer I found myself amazed at having augmented my reality with the ability to see the stars right under me, normally visible only from other parts of Earth. A must-see, at the very least.
  5. FxCamera
    A pretty simple but neat camera application that applies fancy filters to your otherwise crappy photos. It’s a nice addition when you snap a photo and upload it to Facebook or Twitter directly from your phone. Features Toy Camera, Polaroid, etc.
  6. Google Reader
    Ok, not really an Android application, but it is a custom version for mobile use, and I use it a lot while travelling by bus, or when I’m just too lazy to grab my laptop. A very effective way of getting your daily dose from The Hive<tm>.

So with the mentioned applications I’m pretty satisfied with the whole Android experience. The only area that’s currently lacking is Tower Defense games, but that’s probably just a matter of time, and it’s probably good that there aren’t any worth playing yet ;).

As for firmware customizations, I’ve done some experimentation. At first I used the JesusFreke firmware, which got discontinued; next up was CyanogenMod, which was all the rage throughout autumn; and I recently switched to OpenEclair, a rock-solid Android 2.1 build for the G1 that I’m really satisfied with.

It’s nice to see that such a large community of hackers has grown up around the Android project, and I hope it grows even more. I haven’t had time to get involved myself yet, except for a half-assed attempt at playing with Scala, and a small XMMS2 client just to get a feel for the API. Hopefully time permits future adventures in Android hacking; I still have hopes, and it looks like Android is here to stay.

So to sum it up, I’m really satisfied with Android, although I find it a bit sad that no manufacturer has yet come even close to the iPhone’s touchscreen performance (although the S-E X10 Mini is pretty close, unfortunately with a molested UI).

In 2006 my girlfriend bought herself a MacBook, one of those white ones: pretty, easy to use, and all was well. About two years later the laptop started acting weird. It shut down even though there was still a lot of charge left in the battery, among other strange symptoms. A call to Apple and a quick battery check in the store, and she got a new battery thanks to the battery exchange program they had running back then, almost no questions asked, and all was well again.

A while ago the battery started acting up again. We came home from a short vacation to find the battery icon with a cross over it, and the battery didn’t charge. I got on the phone with Apple and they of course answered that I was SOL, but after I refused to accept that, they told me to go to an Apple support store to test whether the battery was depleted or defective. Needless to say it was defective; it had gone from acceptable performance to no performance in the blink of an eye.

Ok, so with the blessing of an Apple technician I called Apple again, and now things started to get strange. Support now told me that batteries are something you use up, and that this battery too was used up, even though the technician had said otherwise. After a while I got the support guy to accept that Apple doesn’t manufacture their batteries to suddenly die after a year’s usage, but rather to become less and less able to hold a charge. Based on this, I pointed out that Konsumentköplagen (a law protecting consumers here in Sweden) covers manufacturing errors, and as we both agreed that the battery was incorrectly manufactured, as determined by their own technician, this would, in my view, give me the right to a new battery.

This convincing had taken a while, and the support guy was definitely not interested in Konsumentköplagen, nor in talking with me, so he redirected me up one level after explaining the case to the next guy.

The next guy had been told by his managers that batteries are something you use up, and that Konsumentköplagen thus didn’t apply, but when I asked for a legal reference for the claim that batteries are specifically not covered by Konsumentköplagen he got a bit defensive, especially after I pointed out that the first hit on Google had the title ”Apple doesn’t care about Konsumentköpslagen”. After a short battle he sent me one step up, to what he explained was their office for more law-related questions.

This time a Danish girl answered and the conversation continued in English. She didn’t seem to have ever heard of Konsumentköplagen, but was kind enough to give me a 30% discount code for a new battery from the Apple Store. I accepted this, as the alternative according to her was to talk to their lawyers, and that seemed like too big an effort considering the price of a new battery.

I’m still not completely sure who was right in this case; they never explicitly said that I was wrong. Konsumentköplagen says it’s up to the consumer to prove a manufacturing error, but as their own technician had already determined this, I believe I was right. I have still not found any explanation of how Konsumentköplagen relates to batteries.

Installed the Last.FM player on my Android yesterday and I suddenly knew that I had just taken a leap into the future.

The days of myPods and other digital music players are reaching their end. I realise that while writing this, millions of people sit in front of their computers pushing music onto their little gadgets before heading out into the street or whatever, and you know what…

<blink>THEY’RE DOING IT WRONG!</blink>

What they should do is get themselves an Android or iPhone, install the Last.FM app, just head outside, click the ”xxxx’s Library” station, and enjoy song after song they probably want to hear right now, for free! The whole world of music, instantly available from that Internet-enabled device in their pocket.

<blink>THIS IS THE FUTURE! IT’S HERE!</blink>

It has also revolutionized my use of Last.FM. As I’m using it on the go, I’m not really doing anything other than listening to music, so it brings me closer to the music in a whole new way.

If I hear a song I like, it’s really easy to pick the phone out of my pocket and press the ❤ button, or maybe skip to the next song if the current one doesn’t fit my mood. And if the song reminds me of someone, the share button is just a click away, sharing either by email via the Android contacts list or to my friends at Last.FM.

I haven’t been this excited about a piece of software since I first started using Xbox Media Center; this IS the best thing since sliced bread (or XBMC, in this case).

It was some time between Christmas and New Year’s last year that I carried my Technics SL-1200MK2 home in my arms. Now, a year later, I can say that it’s hands down one of my best investments ever. I mean.. just look at it..

For the first 6 months it wasn’t used more than perhaps once a week, but when I moved to my new apartment it was the easiest source of music for a long time, due to the chaos of getting things to their proper places. By the time the apartment was finally in order, the habit was already there. I’ve listened to almost no digital music for the past 6 months when at home, and my collection is close to filling its first box. When you spend most of your waking time in front of a computer, it feels very relaxing to kick back on the sofa to a really great analog music experience.

Here’s the list of my current albums, with the ones I’ve listened to the most marked in bold:

  • Metallica
    • Ride the Lightning
    • …and Justice for All
    • Kill ’em All
    • Master of Puppets
  • The Beatles
    • Abbey Road
    • White Album
    • Sgt Pepper
    • Magical Mystery Tour
  • Peps Persson
    • Rotrock
    • Persson sjunger Persson
  • Bob Marley
    • Kaya
    • Uprising
  • Jimi Hendrix
    • Band of Gypsys
    • Electric Ladyland
  • Jim Morrison
    • An American Prayer
  • The Doors
    • The Soft Parade
    • Absolutely Live
    • Waiting for the Sun
    • LA Woman
  • Glenn Miller
    • Story
    • A Memorial 1944-1969
  • U2
    • The Joshua Tree
  • Nirvana
    • Nevermind
  • Mikael Wiehe
    • Kråksånger
  • The Human League
    • Reproduction
  • Plastikman
    • Sheet One
    • Closer
    • Musik
  • Black Sabbath
    • Black Sabbath
    • Sabbath Bloody Sabbath
    • Paranoid
  • Pink Floyd
    • The Wall
    • Dark Side of the Moon
    • Atom Heart Mother
    • Wish You Were Here
    • A Saucerful of Secrets
    • Animals
  • Front 242
    • Tyranny For You
  • Kraftwerk
    • Autobahn
    • The Man Machine
    • Radio-Activity
  • Bo Hansson
    • Ur Trollkarlens Hatt
    • Sagan om Ringen
    • El-ahrairah
    • Mellanväsen
  • The Stooges
    • No Fun

If you’re thinking about getting a turntable but haven’t made the final decision yet, then do it! You will not regret it.

Two weeks ago it happened again: the annual synth music festival in Malmö, electricXmas. The evening started with some boozing up at the office with a crowd of 10 people or so, lots of nice music and chatter before we hit the club. This year’s lineup was pretty nice: Biomekkanik, Autodafeh, Agonize, Interlace and Welle:Erdball. I wasn’t really interested in anything other than Interlace and Welle:Erdball, but those two each played really great sets.

Also.. Welle:Erdball threw their instrument of choice into my drunk arms:

C64

Unfortunately it doesn’t seem to work properly after that crazy night, or I’ve failed to tune the TV correctly (although I can easily find my other C64 on this TV). Next up is probably to move the innards of my other C64 into this new, signed chassis and get it to actually play some Welle:Erdball SIDs. Pretty nice to have a dedicated Welle:Erdball computer 🙂

Right, and at least I requested vinyl versions of all of Interlace’s albums; the answer was that the idea was interesting, and that it might happen. Enough for me to keep on hoping.

Update:
I just ordered a 1541 Ultimate, so in the not-too-distant future there will be Welle:Erdball playing on the real shit!

The GSoC Mentors Summit in all its glory, but all sessions and no hacking made drax and me dull boys… enter Skidbladnir to bring joy to life!

After a day of slow sessions, with me hacking on Abraca and drax hacking on a new web 2.0 client, we decided that enough was enough: time to get some collaboration going.

I actually came up with the idea a really long time ago, when Service Clients were just a vague idea in the minds of drax, theefer, and the wanderers.

As I live in Sweden, home of the fast Internets, I know that a whole lot of people would be very happy if their favorite music player had easy access to everyone’s favorite, The Pirate Bay, for getting more content.

A typical scenario: I’m playing some song by Timbuktu, one of Sweden’s most popular artists, and my music player automagically notices that I’m missing the new single he officially released first to the world on The Pirate Bay *hint hint hint all other artists*, and then presents a link to that torrent for me to click on and download using my favorite torrent client.

This feature is so hot that ALL XMMS2 clients should have it, thus we wanted to do this as a Service Client.

So, late Saturday afternoon just before we left the Googleplex, I started updating the xmmsclient Python bindings to match the Service Client branch my student had written during GSoC. Meanwhile, drax was working on getting his web client ready, along with some helpers to compute the string distance between Freebase data and some mock Pirate Bay torrent names. Due to jet lag my evening ended early, but when I woke up somewhere around 3AM I had a great message from The Pirate Bay waiting for me about getting early access to their upcoming webservice API. The rest of Sunday was spent frantically hacking the Python bindings so that we could have a running demo before I had to leave for the airport, and it worked! Around 2:45PM we made the first working request from the service client, and I ran to the bus.

So to summarize what this client does:

  1. Register as a service client that accepts an artist (string) as its argument.
  2. Accept a request.
  3. Find albums by the artist in the media library.
  4. Find albums by the artist in Freebase.
  5. Find albums by the artist in The Pirate Bay.
  6. Subtract the albums in the media library from the albums returned by Freebase.
  7. Calculate the string distance between what’s left of the Freebase result and The Pirate Bay result, mapping good album names to the correct but crappy torrent names (see the sketch after this list).
  8. Return a list of albums missing from the media library for the given artist, with links to download.
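
Steps 6 and 7 are the interesting part. Here is a minimal sketch using difflib for the string distance; the real client’s metric may differ, and the album and torrent names below are only illustrative:

import difflib

def missing_albums(medialib_albums, freebase_albums):
    # Step 6: everything Freebase knows about, minus what we already have.
    have = set(album.lower() for album in medialib_albums)
    return [album for album in freebase_albums
            if album.lower() not in have]

def match_torrents(albums, torrent_names, cutoff=0.4):
    # Step 7: map each clean album name to the closest crappy torrent
    # name, if one is similar enough.
    lowered = dict((name.lower(), name) for name in torrent_names)
    matches = {}
    for album in albums:
        best = difflib.get_close_matches(album.lower(), list(lowered),
                                         n=1, cutoff=cutoff)
        if best:
            matches[album] = lowered[best[0]]
    return matches

missing = missing_albums(["Oberoendeframkallande"],
                         ["Oberoendeframkallande",
                          "Alla vill till himmelen"])
print(match_torrents(missing,
                     ["Timbuktu-Alla.Vill.Till.Himmelen.2005-GRP"]))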

Right.. and the name Skidbladnir refers to the ship of Freyr that sails the Scandinavian intern^H waters with fair wind, and folds easily into one’s pocket.