Blog Entries tagged software
Feeds: RSS | Atom

Continuous Integration with Hudson - embarrassingly simple!

Published: 2011-01-27 19:24 UTC. Tags: open source software testing

At work, I'm building a rather large reporting and analytics application that runs on top of Hadoop. It has tests. A whole bunch of them, actually. That's good.

So far, we've been running the tests manually when making new releases. But running them more often is always better, since it gives you an early indication of when things go wrong, and it forces you to keep your tests in a state where they pass. Some people call this Continuous Integration.

Now, you can do all the work getting your builds to build and run tests yourself, via cron and scripts and other types of messiness. Or you can try an existing solution. Today I decided to try Hudson.

That turned out to be embarrassingly simple to get started with. Basically, it's a matter of:

  1. Download hudson.war from their site.
  2. Start it by running java -jar hudson.war.
  3. Go to http://localhost:8080 with a web browser of your choice. That would be Opera in my case. You have to eat your own dog food.
  4. Go to the Hudson management screen and enable the git plugin.
  5. Set up a new project. Tell it where the code is and on which branch.
  6. Configure which commands to run to build and test. Make the test command output an xunit XML file.
  7. Tell Hudson where that XML file is.

Result: Hudson will periodically poll git and run my build and test commands, then show a changelog and what tests failed. All this after 30 minutes of setup time. I'm impressed.
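Step 6 above assumes your test runner can produce an xunit-style XML report. If yours can't, the format is simple enough to generate yourself; here's a rough Python sketch (a simplified subset of the schema, not Hudson's exact requirements):

```python
import xml.etree.ElementTree as ET

def xunit_xml(suite_name, results):
    """Build a minimal xunit-style report.

    results: list of (test_name, error_message_or_None) tuples.
    """
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)),
                       failures=str(sum(1 for _, err in results if err)))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err is not None:
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")
```

Point Hudson at the file this produces (step 7) and it will chart test failures per build.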


Forsberg's Law on Cron Jobs

Published: 2010-02-19 09:45 UTC. Tags: software

They never work as intended the first four times you run them.


Deleting Amazon S3 buckets using Python

Published: 2009-08-09 10:38 UTC. Tags: software misctools

For a while, I used Duplicity to make backups to an Amazon S3 bucket. That kind of worked, but I had to do a lot of scripting myself to get it working automatically, so after finding out about Jungledisk, I switched to that. Jungledisk has a nice little desktop applet that keeps track of doing my backups while my computer is on, etc. That's convenient.

Anyway, the Duplicity/S3 experiments left me with an Amazon S3 bucket containing about 9000 objects. Getting rid of that proved to be something of a challenge - you have to delete all objects inside the bucket before you can delete the bucket itself, and there's no single API call for doing that. I also tried the web application for managing buckets, S3FM, but it didn't cope too well with that many objects - my web browser just hung.

I have to admit I could have put more effort into googling before solving it by writing my own script - but writing my own script was more fun :-).

My script managed to delete all 9000 objects without trouble, although it did take quite a while to complete - I let it run overnight.

If you need to do the same thing, it's available here:

StackOverflow has several other solutions:
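For the record, the core of such a cleanup is small. Here's a sketch against today's boto3 API (my 2009 script used the boto library of the time, and deleted objects one at a time rather than in batches):

```python
def chunked(keys, size=1000):
    """S3's bulk-delete call accepts at most 1000 keys per request."""
    batch = []
    for key in keys:
        batch.append(key)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def empty_and_delete_bucket(s3, bucket_name):
    """List every key in the bucket, delete them in batches, then
    delete the now-empty bucket. 's3' is a boto3 S3 client."""
    paginator = s3.get_paginator("list_objects_v2")
    keys = (obj["Key"]
            for page in paginator.paginate(Bucket=bucket_name)
            for obj in page.get("Contents", []))
    for batch in chunked(keys):
        s3.delete_objects(
            Bucket=bucket_name,
            Delete={"Objects": [{"Key": key} for key in batch]},
        )
    s3.delete_bucket(Bucket=bucket_name)
```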


Are happy programmers dangerous?

Published: 2009-02-26 21:18 UTC. Tags: software

I went to a seminar about Scrum today at ENEA. It was one of those "let's have a seminar and then give people free food and beer so they buy more consultants from us" type of events.

Even more interesting were the questions after the seminar. Lots of people from different tech companies in Linköping were there. Someone said that Scrum kept the programmers happy, which would produce better code. That's probably true. Here comes the fun part - another person in the audience was worried that happy programmers would code things they thought were fun instead of the things they were supposed to do.

Hmm.. yeah, right. That's the way it works. Or maybe not! I'd say the risk is much bigger that bored programmers spend their time on things they shouldn't be doing.

I would really like to know where this person works, so I can avoid working there.


Weird Django Bug

Published: 2008-12-04 17:36 UTC. Tags: software django

I think I hit Django bug #6681 yesterday. It's the kind of bug that triggers only if three completely different conditions are met at the same time, where at least one of them depends on timing. In this case:

  • The Apache process must be recently started.
  • The first request hitting the Apache process must be for a page that requires reST rendering.
  • The reST must have a section that triggers the default interpreted role.

For reference, the exception was:

AttributeError: Values instance has no attribute 'default_reference_context'

Migrating Plone Sites to Django

Published: 2008-11-01 20:50 UTC. Tags: software plone django noplone

As mentioned earlier on this blog, I have converted this site from Plone to Django.

The conversion included migrating most of the data from the Plone instance's ZODB (Zope Object Database) into Django's ORM.

The hard part of that process is getting the data out of the ZODB, as the format depends completely on which Plone products you have been using. You need to check the schema of each product you want to extract data from to get the field names, and you need to write code that extracts the data from each content type and adds a new object in Django's ORM, translating data from one format to another where necessary.
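Stripped of the ZODB specifics, the per-type work boils down to a field mapping, roughly like this sketch (the type and field names here are hypothetical, not my actual schema):

```python
# Which source field feeds which Django model field, per content type.
FIELD_MAPS = {
    "Document": {"title": "title", "text": "body"},
    "BlogEntry": {"title": "title", "text": "body", "effectiveDate": "published"},
}

def extract(content_type, plone_fields):
    """Translate one Plone object's raw field dict into kwargs suitable
    for constructing the corresponding Django model instance."""
    mapping = FIELD_MAPS[content_type]
    return {dst: plone_fields[src] for src, dst in mapping.items()}
```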

In my case, I wrote a script that takes care of:

  • Document and Folder objects from Plone's standard contenttypes.
  • Blog entries in a Quills blog.

The script will traverse all Document, Folder and Blog entries and extract their data, adding instances of Django models from two custom Django products I have written.

A second script will then read the data from Django's database and modify the URLs in <img> tags, downloading the images from the Plone site via HTTP to a directory which will be configured as MEDIA_ROOT in Django. The script does the same for <a> tags that refer to images, .tar.gz files, etc. that also were stored in the ZODB of the Plone instance.
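The rewriting step of that second script is essentially a search-and-replace over the stored HTML; a simplified sketch (the URL prefixes are hypothetical):

```python
import re

def rewrite_img_urls(html, old_prefix, new_prefix):
    """Repoint <img src="..."> attributes from the old Plone site
    to the local media directory (MEDIA_ROOT)."""
    pattern = re.compile(r'(<img\b[^>]*\bsrc=")' + re.escape(old_prefix))
    return pattern.sub(lambda m: m.group(1) + new_prefix, html)
```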

Each site that wants to do a conversion from Plone to Django will have to write their own script, as the set of products used in the Plone instance is site-specific, and the set of applications used in Django is also site-specific. However, I have made my conversion scripts available via subversion in the hope that they can serve as an example.

To access the scripts, check them out with your version control client. Example:

git clone git://

The scripts are in the plone2django_migration subdirectory.

You can also browse them here:


Inline links in reStructured Text

Published: 2008-11-01 20:26 UTC. Tags: software

I've been using reStructured Text for documentation purposes for a couple of years, but I have never read the full specification, only the quick reference - and I probably read that rather... quickly. So quickly, indeed, that I was always irritated that I couldn't find out how to write hyperlinks without having to list them at the end of the document.

But today I found it. So here it is, for my future reference:

Add a `hyperlink to reST <>`_

That's the quick-n-dirty variant. Here's the perhaps more readable, but more cumbersome approach that I've been using until now:

Add a `hyperlink to reST`_

.. _hyperlink to reST:

ViewVC Django Integration

Published: 2008-10-25 20:14 UTC. Tags: open source software django version_control

I've written a Django product that helps integrate ViewVC into a Django site.


A new Toy

Published: 2008-10-14 16:51 UTC. Tags: software

Post-Vacation Coding

Published: 2008-08-12 16:57 UTC. Tags: software humor

I came back to work yesterday after four weeks of vacation. Not doing much coding during such a long period of time has its effects. Here's the first line of a shell script I wrote yesterday afternoon:


Upgrading Wordpress from 2.0.6 -> 2.5 - not a smooth experience

Published: 2008-05-13 18:40 UTC. Tags: software wordpress

My girlfriend runs a Wordpress-powered blog. Yesterday, she decided to upgrade it from 2.0.6 to 2.5. A good decision, given that the old version had a known security vulnerability.

Most of the upgrade went fine, but it left her with a mysterious problem - blog entries, links etc. could all be edited, but trying to edit existing, published pages rendered an error message: "You are not allowed to edit this page."

After some serious head-scratching, I found this post, which led to this forum post, which gave a rather mysterious hint about what needed to be done to fix the problem. It turns out that the upgrade had failed to update the row with option_name = wp_user_roles in the wp_options table.

So, the following SQL statement fixed the problem for me:

UPDATE wp_options set option_value = 'a:5:{s:13:"administrator";a:2:{s:4:"name";s:13:"Administrator";s:12:"capabilities";a:47:{s:13:"switch_themes";b:1;s:11:"edit_themes";b:1;s:16:"activate_plugins";b:1;s:12:"edit_plugins";b:1;s:10:"edit_users";b:1;s:10:"edit_files";b:1;s:14:"manage_options";b:1;s:17:"moderate_comments";b:1;s:17:"manage_categories";b:1;s:12:"manage_links";b:1;s:12:"upload_files";b:1;s:6:"import";b:1;s:15:"unfiltered_html";b:1;s:10:"edit_posts";b:1;s:17:"edit_others_posts";b:1;s:20:"edit_published_posts";b:1;s:13:"publish_posts";b:1;s:10:"edit_pages";b:1;s:4:"read";b:1;s:8:"level_10";b:1;s:7:"level_9";b:1;s:7:"level_8";b:1;s:7:"level_7";b:1;s:7:"level_6";b:1;s:7:"level_5";b:1;s:7:"level_4";b:1;s:7:"level_3";b:1;s:7:"level_2";b:1;s:7:"level_1";b:1;s:7:"level_0";b:1;s:17:"edit_others_pages";b:1;s:20:"edit_published_pages";b:1;s:13:"publish_pages";b:1;s:12:"delete_pages";b:1;s:19:"delete_others_pages";b:1;s:22:"delete_published_pages";b:1;s:12:"delete_posts";b:1;s:19:"delete_others_posts";b:1;s:22:"delete_published_posts";b:1;s:20:"delete_private_posts";b:1;s:18:"edit_private_posts";b:1;s:18:"read_private_posts";b:1;s:20:"delete_private_pages";b:1;s:18:"edit_private_pages";b:1;s:18:"read_private_pages";b:1;s:12:"delete_users";b:1;s:12:"create_users";b:1;}}s:6:"editor";a:2:{s:4:"name";s:6:"Editor";s:12:"capabilities";a:34:{s:17:"moderate_comments";b:1;s:17:"manage_categories";b:1;s:12:"manage_links";b:1;s:12:"upload_files";b:1;s:15:"unfiltered_html";b:1;s:10:"edit_posts";b:1;s:17:"edit_others_posts";b:1;s:20:"edit_published_posts";b:1;s:13:"publish_posts";b:1;s:10:"edit_pages";b:1;s:4:"read";b:1;s:7:"level_7";b:1;s:7:"level_6";b:1;s:7:"level_5";b:1;s:7:"level_4";b:1;s:7:"level_3";b:1;s:7:"level_2";b:1;s:7:"level_1";b:1;s:7:"level_0";b:1;s:17:"edit_others_pages";b:1;s:20:"edit_published_pages";b:1;s:13:"publish_pages";b:1;s:12:"delete_pages";b:1;s:19:"delete_others_pages";b:1;s:22:"delete_published_pages";b:1;s:12:"delete_posts";b:1;s:19:"delete_others_posts";b:1;s:22:"delete_published_posts";b:1;s:20:"delete_private_posts";b:1;s:18:"edit_private_posts";b:1;s:18:"read_private_posts";b:1;s:20:"delete_private_pages";b:1;s:18:"edit_private_pages";b:1;s:18:"read_private_pages";b:1;}}s:6:"author";a:2:{s:4:"name";s:6:"Author";s:12:"capabilities";a:10:{s:12:"upload_files";b:1;s:10:"edit_posts";b:1;s:20:"edit_published_posts";b:1;s:13:"publish_posts";b:1;s:4:"read";b:1;s:7:"level_2";b:1;s:7:"level_1";b:1;s:7:"level_0";b:1;s:12:"delete_posts";b:1;s:22:"delete_published_posts";b:1;}}s:11:"contributor";a:2:{s:4:"name";s:11:"Contributor";s:12:"capabilities";a:5:{s:10:"edit_posts";b:1;s:4:"read";b:1;s:7:"level_1";b:1;s:7:"level_0";b:1;s:12:"delete_posts";b:1;}}s:10:"subscriber";a:2:{s:4:"name";s:10:"Subscriber";s:12:"capabilities";a:2:{s:4:"read";b:1;s:7:"level_0";b:1;}}}' where option_name = 'wp_user_roles';

Your mileage may vary. Don't do this at home if you don't know what you're doing, kids.
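One thing worth knowing before pasting a blob like that: option_value is a PHP-serialized array, and every s:LEN:"..." token declares the byte length of the string that follows, so a single mangled character corrupts the whole thing. A quick sanity check, sketched in Python:

```python
import re

def check_php_strings(blob):
    """Return the s:LEN:"..." tokens whose declared length doesn't match
    the actual string length. An empty list means the blob is at least
    self-consistent."""
    bad = []
    for match in re.finditer(r's:(\d+):"(.*?)";', blob):
        declared, value = int(match.group(1)), match.group(2)
        if len(value.encode()) != declared:
            bad.append(match.group(0))
    return bad
```

This naive regex is fine for simple identifier strings like the WordPress capability names above; strings that themselves contain '";' would need a real unserializer.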

Now I'll have to go and do something about a serious itch on my left leg caused by having to read so much PHP code.


You learn something new every day. Today, I learnt some OpenOffice

Published: 2008-02-25 21:40 UTC. Tags: software

Today, I found out that different paragraph styles in OpenOffice can have different languages, which is the reason why the Tools->Options->Language->Languages setting flips back when you change it. You need to modify the language in the popup that appears when you press F11. Right-click on a paragraph style and choose Modify, then go to the Font tab. There it is!

This link is useful.


A contenttype template in ZopeSkel's localcommands

Published: 2007-12-02 22:14 UTC. Tags: software plone

As Tarek noted the other day, I've added a template for injecting content types into an existing Archetypes product, using Mustap's localcommands support for ZopeSkel.


Here's what the contenttype template will do for you after you've answered some questions:

  • Define a new Add Permission in the product's config.py.
  • Define an interface for the content type you're creating.
  • Create a content type class, in a subdirectory named content/
  • Register this content type class in content/configure.zcml
  • Register the new content type in the factorytool, by adding to profiles/default/factorytool.xml
  • Register basic permissions for the new content types, by adding to profiles/default/rolemap.xml
  • Register the content type with portal_types by adding to profiles/default/types.xml
  • Add a new file with configuration information in profiles/default/types/.


Here's an example of how to use the template. You'll need Mustap's branch of ZopeSkel for this to work:

Begin by creating a new product based on ZopeSkel's archetype template:

$ paster create -t archetype
Selected and implied templates:
  ZopeSkel#basic_namespace  A project with a namespace package
  ZopeSkel#plone            A Plone project
  ZopeSkel#archetype        A Plone project that uses Archetypes
Enter project name: efod.test
  egg:      efod.test
  package:  efodtest
  project:  efod.test
Enter title (The title of the project) ['Plone Example']: Contenttype Example
Enter namespace_package (Namespace package (like plone)) ['plone']: efod
Enter package (The package contained namespace package (like example)) ['example']: test
Enter zope2product (Are you creating a Zope 2 Product?) [False]: True
Enter version (Version) ['0.1']:
Enter description (One-line description of the package) ['']:
Enter long_description (Multi-line description (in reST)) ['']:
Enter author (Author name) ['Plone Foundation']:
Enter author_email (Author email) ['']:
Enter keywords (Space-separated keywords/tags) ['']:
Enter url (URL of homepage) ['']:
Enter license_name (License name) ['GPL']:
Enter zip_safe (True/False: if the package can be distributed as a .zip file) [False]:

Now go into the directory just created:

$ cd efod.test
$ paster addcontent contenttype
Enter contenttype_name (Content type name ) ['Example Type']:
Enter contenttype_description (Content type description ) ['Description of the Example Type']:
Enter folderish (True/False: Content type is Folderish ) [False]:
Enter global_allow (True/False: Globally addable ) [True]:
Enter allow_discussion (True/False: Allow discussion ) [False]:
  Inserting from config.py_insert into /home/forsberg/dev/plone/mustaptest/src/efod.test/efod/test/
  Recursing into content
    Copying +content_class_filename+.py_tmpl to /home/forsberg/dev/plone/mustaptest/src/efod.test/efod/test/content/
    Inserting from configure.zcml_insert into /home/forsberg/dev/plone/mustaptest/src/efod.test/efod/test/content/configure.zcml
  Inserting from interfaces.py_insert into /home/forsberg/dev/plone/mustaptest/src/efod.test/efod/test/
  Recursing into profiles
    Recursing into default
      Inserting from factorytool.xml_insert into /home/forsberg/dev/plone/mustaptest/src/efod.test/efod/test/profiles/default/factorytool.xml
      Copying rolemap.xml_insert to /home/forsberg/dev/plone/mustaptest/src/efod.test/efod/test/profiles/default/rolemap.xml
      Recursing into types
        Copying +types_xml_filename+.xml_tmpl to /home/forsberg/dev/plone/mustaptest/src/efod.test/efod/test/profiles/default/types/Example_Type.xml
      Inserting from types.xml_insert into /home/forsberg/dev/plone/mustaptest/src/efod.test/efod/test/profiles/default/types.xml

That's it! After registering the archetype product in your buildout and installing it in Plone, you'll be able to add new 'Example Type' content.

Of course, for your custom content type to be really useful, you'll have to edit it, adding new fields and modifying templates.

Some Details

I am by no means a Plone guru, so some of the details in how this template constructs the new content type may be less than optimal. Please tell me if that's the case, and I'll see what I can do.

Base Classes

Depending on the answer to the "Folderish" question, the newly created content type will inherit either from Products.ATContentTypes.content.base, or from Products.ATContentTypes.content.folder.

Standard Fields

The template will set up AnnotationStorage and bridge properties for the standard title and description fields.


Permissions

The Manager role is given the add permission for the content type (in profiles/default/rolemap.xml).

zope2.View is required to view the content, and cmf.ModifyPortalContent to edit.


Views

The standard views auto-generated by Archetypes are used as the default. Perhaps generating a custom view class, with a template, would be more Plone 3? On the other hand, there's already another template available for creating a view.

Please test!

Please test the template and tell me what you think! What should be made different, and what have I forgotten?


URL as UI - a bad example from the real world

Published: 2007-09-16 14:04 UTC. Tags: software world wide web

In today's issue of Dagens Nyheter, the largest morning paper in Sweden, an article about pensions caught my eye - not so much because of the subject, but because of the URLs referred to in the article. Here are the three URLs:

The first one is OK from a user interface perspective, but the second and the third one made me chuckle, especially as the second one actually had a note which freely translated went something like "Note that four underline characters are needed for the direct link to work".

Clearly, the people who designed this web site have read neither Jakob Nielsen's "URL as UI" nor W3C's "Cool URIs don't change". I guess they never thought about being referred to from a newspaper in print, where people actually have to type the URLs into their web browsers.

Dagens Nyheter tries to make the situation better by providing a link to their online version of the article at, which indeed makes it a bit easier for readers to click the links. Sadly, the webmasters of Dagens Nyheter haven't done their homework either - the link is just a redirect to the article, which has this beautiful URL: So, let's say you saved a bookmark to the article and want to refer a friend of yours to it by word of mouth a week later, when you have forgotten that there was a human-friendly link available - now you must send the link via e-mail or other electronic media. Simply telling your friend over the phone is completely impossible, which it would not have been if the link were something like

Oh well.. the world is far from perfect. Happily, at work, we're using a content management system that automatically creates URLs that look nice: Plone.
finally in production!

Published: 2007-08-25 22:27 UTC. Tags: open source software

For over a year, I've been working on replacing the bug tracker used by the python project. Earlier, they used the horrible bug tracker provided by sourceforge. Now, since last Thursday, they are using their own tracker, based on roundup.

Hooray! I'm very happy we're finally there!

The new tracker is at

Earlier posts on the subject:


PlacelessTranslationService bit my head off!

Published: 2007-08-09 16:34 UTC. Tags: open source software plone

Today, I made the mistake of adding a <myproductname>-sv.po file to the i18n directory of my product, which was accidentally marked with

Language: en

This, of course, made the English strings appear in Swedish, but that's only to be expected. What happened next was worse - after correcting the file to contain:

Language: sv

...there was no change! The English strings still appeared in Swedish!

After a frustrating debugging session, it turned out that the language property of the GettextMessageCatalog object stored in the ZODB (I think) is not reloaded when the Language property in the file is changed. So even if you change the Language line in the .po file, the translation will still be marked as being in the language it was first entered as.

The solution? Move the .po-file out of the i18n directory, restart Zope, then move it back, and restart again.


Well, at least I can be glad that my CMS is an open source product, allowing me to debug problems properly.


Fighting tracker spam with SpamBayes

Published: 2007-07-28 00:43 UTC. Tags: open source software

During the last few days I've had time to do some programming just for fun. When you combine vacation and bad weather, you can be very productive :-).

Most of the time, I've been doing work for the roundup tracker instance for the python development team. I wrote about the new importer earlier, and now I've also created an anti-spam system based on SpamBayes.

For those interested, there's a technical description of the roundup spambayes integration in the roundup wiki.

SpamBayes seems to be a nice piece of software, especially now that it has an XMLRPC interface. Imagine having a SpamBayes XMLRPC server on your network, then a plugin in your mail user agents that calls it to rate messages before sorting them into folders, and a button in the user interface that lets users report messages as spam or legitimate content. That would be very powerful and give very few incorrect ratings, as each organization's SpamBayes server would learn what counts as legitimate content for the organization where it's installed.
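From Python, such a setup could be talked to with the standard XMLRPC client. A sketch - the endpoint URL and the score method name are assumptions for illustration, not SpamBayes' documented API:

```python
import xmlrpc.client

def classify(score, ham_cutoff=0.2, spam_cutoff=0.9):
    """Map a SpamBayes-style spam probability to a verdict. Scores
    between the cutoffs are 'unsure' - that tristate is what makes
    training on user corrections work well."""
    if score >= spam_cutoff:
        return "spam"
    if score <= ham_cutoff:
        return "ham"
    return "unsure"

def rate_message(server_url, raw_message):
    proxy = xmlrpc.client.ServerProxy(server_url)  # hypothetical endpoint
    return classify(proxy.score(raw_message))      # hypothetical method name
```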

When to sort?

Like all users who receive large amounts of mail (in my case mostly because of mailing list subscriptions), I sort my mail. For my personal mail, I let the Cyrus IMAP server do the sorting as the messages arrive, using the Sieve sorting language, and I use the Sieve filter interface in Squirrelmail to create rules. This is very convenient, as the mail is always sorted when I get to my mail client, regardless of which client I use. Also, I only have to define my filter rules in one place. I only wish there were more mail user agents with decent Sieve filter support.

For sorting out spam, however, the later the sort is done, the better - at least for some kinds of anti-spam measures, including statistical filters such as SpamBayes. Imagine the following scenario:
  • User A, who likes to arrive at the office early in the morning, opens his INBOX and finds 2 messages that have been incorrectly classified as legitimate mail. He presses the 'report as spam' button, which in an ideal world will teach the local SpamBayes server to score such messages better.
  • User B, who is a lazy bastard, arrives two hours later. When his mail user agent sorts his mail using the local SpamBayes server, he benefits from the work done by User A earlier in the morning, as the two messages are now correctly sorted.
On the other hand, the statistical filters seem to work well enough on typical spam that sorting early (on the server) is probably good enough, so I think I'll continue to let my Cyrus IMAP server do the sorting for me. A button in my mail user agent for classifying messages would still be neat, though.

Life as a conversion script author

Published: 2007-07-25 13:05 UTC. Tags: open source software

About a year ago, the infrastructure team of the python language project sent out a call for trackers. They had come to the conclusion that the tracker available at sourceforge was not good enough. I can understand that - it's very hard to use, and since it's running on sourceforge's servers, it can't be customized.

I and several other people thought that roundup, a tracker infrastructure, would be a good choice, so we formed a team and managed to come up with a submission for the call. This included writing a conversion script that took the data from sourceforge and imported it into the new tracker. I created this script based on a screenscraper library for sourceforge written by Fredrik Lundh. This was importer #1.

Later on, roundup was selected as one of the two final alternatives. Happy happy, joy joy :-). A team was formed (including me) to create the tracker, and Upfront Systems kindly provided a Linux host to run it on.

Now began the real work of designing the tracker and adjusting the importer to the final schema. During this time, sourceforge managed to fix their broken xml export, so I wrote a new importer that took an xml file as input instead of screenscraping webpages, which was much faster and more reliable. That is, I wrote importer #2.

Later on, when we were beginning to get ready for production launch, a real showstopper showed up - the xml export from sourceforge couldn't cope with the size of the python project. The export was missing data.

After several months of waiting, sourceforge produced a new export script that includes all data. Unfortunately, it uses a completely new xml format. Writing a third importer was less than fun, but I managed to complete importer #3 yesterday. Hopefully, I didn't introduce too many bugs...

Who knows, maybe the python project will have a new tracker sometime this year? :-)

Try out the new tracker at


Plone 2.5.2 and LDAP - revisited

Published: 2007-03-04 18:23 UTC. Tags: software LDAP plone

One or two years ago, I spent some time trying to understand how to connect Plone 2.0 to LDAP. I really had no luck, as things were complicated. Reading existing users out of the directory might have been possible, but creating users was unheard of.

I decided to check out the current state of Plone and LDAP again with a more modern version of Plone - in my case, Plone 2.5.2. After some heavy experimentation, I've come to the conclusion that the software involved has grown more mature, but it's still hard to get it all working.

Sources of Information

Software Requirements

  • python-ldap. Make sure the python that is used to run Zope has this module available, or nothing at all will work.
  • LDAPUserFolder. I used version 2.8beta.
  • LDAPMultiPlugins. I first tried version 1.4, but got some problems. Version 1.5, released yesterday(!), works much better.
  • The LDAPMultiPlugins patch available at For me, it applied cleanly on top of LDAPMultiPlugins 1.5. It adds functionality that PlonePAS needs. Group memberships seem to work much better with this patch than without it.
  • Two patches, one on CMFPlone/ (download here), and one on PasswordResetTool/skins/PasswordReset/ (download here). Without these, registration will fail. Please note that both patches are ugly hacks that are not long-term solutions to the problem.
  • This patch:, or login after password reset will fail with a recursion depth error.


Installation

Drop LDAPUserFolder and LDAPMultiPlugins into your Products folder, apply the patches listed above, and restart Zope.


In short, you add an LDAP Multi Plugin to your PAS folder (acl_users in the ZMI) using the dropdown in the top right corner, and then configure it.

Theory of Operation

Plone 2.5 uses PlonePAS, an adaptation of Zope's PAS (the Pluggable Authentication Service), for its user/group handling. That is a good thing, as PAS is a very flexible system that can do just about anything.

To get LDAP users, groups and authentication, LDAPMultiPlugins needs to be installed and configured. Once configured, LDAPMultiPlugins contains an LDAPUserFolder that does the actual fetching of information from LDAP. The different plugins in LDAPMultiPlugins then add functionality such as authentication and user and group enumeration to PlonePAS.

Configuration of which LDAP server(s) to use, which base to use, etc. is done by visiting acl_users -> <your LDAPMultiPlugin> -> Contents -> acl_users. A bit awkward to find, if you ask me.


It's very important to pay attention to the LDAP Schema tab under the LDAPUserFolder.
  • The LDAP attribute used to keep the full name of the user must be mapped to fullname. In my case, this means that the LDAP attribute cn should be mapped to fullname. For other directory configurations, the attribute may be named differently - Novell eDirectory, for example, uses cn as the username.
  • The LDAP attribute used to keep the e-mail address of the user must be mapped to email. In most cases, this means that the LDAP attribute mail should be mapped to email.
  • Only attributes listed in the LDAP Schema tab are available in the dropdowns used to select which field to use as the login name attribute, username, etc. in the configuration of LDAPUserFolder.
  • All attributes listed as MUST in the LDAP schemas used to create new users (and to search for existing ones) must be listed under the LDAP Schema tab. If not, user registration will fail due to LDAP schema errors.
It's also very important to pay attention to the list of User Object Classes in the configure tab. This list is used both to construct the query used when searching for user objects, and to create new user objects at registration. At new-user registration, an LDAP object is first created with all attributes (except the RDN attribute) set to [unset] in the LDAP database - as mentioned above, all attributes listed under the LDAP Schema tab are filled with this value. Later in the registration codepath, the attributes actually mapped to Plone attributes are set (one attribute at a time, in separate LDAP requests).
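To make that concrete, the search query built from the User Object Classes list has roughly this shape (a sketch; the exact filter LDAPUserFolder constructs may differ):

```python
def user_search_filter(object_classes, login_attr, login):
    """Build an LDAP filter that ANDs all configured object classes
    with the login attribute match."""
    classes = "".join("(objectClass=%s)" % oc for oc in object_classes)
    return "(&%s(%s=%s))" % (classes, login_attr, login)
```

This is also why a missing object class in the list makes existing users invisible: the AND over object classes silently excludes them from every search.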

The order of the PAS plugins is very important. To get user registration to work, and for other things as well, the LDAP Multi Plugin should be at the top of the list of plugins for each plugin type.

For (much) better performance, add caching by visiting the Caches tab of both ZMI -> acl_users and your LDAP Multi Plugin. Adding a cache to source_groups also seems like a good idea (there's no cache tab, so you'll have to find the URL to the cache management yourself - it's something like http://uterus:8080/Plone/acl_users/source_groups/ZCacheable_manage). For me, it seems to work with the RAM Cache Manager that already exists in any Plone 2.5 installation.

That's all the things I can remember as being important from yesterday's late night session :-).


Access SugarCRM from Python via SOAP

Published: 2007-01-30 22:35 UTC. Tags: software sugarcrm soap

I spent part of the evening writing the embryo of a python module that will hopefully make it easy to access SugarCRM from Python via SOAP.

Right now, it only contains code for adding, updating, getting and deleting Accounts, but that could easily be extended. There's a bunch of unit tests, too. It's not very useful unless you're a developer. You need the Zolera Soap Infrastructure to get it to work.
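One detail worth writing down for anyone doing the same: SugarCRM's SOAP login wants the password as an MD5 hex digest, not in clear text. A sketch (the parameter names are assumptions from memory, not checked against the SugarCRM docs):

```python
import hashlib

def sugar_credentials(user, password):
    """Build the credential struct for SugarCRM's SOAP login call,
    with the password hashed the way the API expects."""
    return {
        "user_name": user,
        "password": hashlib.md5(password.encode()).hexdigest(),
    }
```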

Get the code from SVN:

svn co 

See also:


Published: 2007-01-22 19:45 UTC. Tags: software

Bought a book (Web Component Development with Zope 3) on Amazon. A few days later, I got a mail from Amazon with six recommendations for other books they thought I might like.

I already own two of them. Obviously I fit some kind of profile. Scary.


Yes, unit testing can be fun :-)

Published: 2007-01-15 21:49 UTC. Tags: software testing software engineering

Yesterday, I read one of the teaser chapters of Pragmatic Unit Testing in Java with JUnit. It was a very interesting chapter, so I eventually ordered the book.

According to the reviews, and from the looks of the table of contents, it doesn't talk too much about JUnit, but rather about general unit testing methodology, which is what I want, as I don't do Java programming if I can avoid it. (Another book on my bookshelf is, by the way, Beyond Java; although I think the author believes a bit too much in XML as a must for the future, I can agree with most of the conclusions in that book - Java is on its way out, except as a niche language for "enterprise" applications. I don't know what "enterprise" is supposed to mean - "expensive and slow", perhaps?)

Anyway, today at work, I had to rewrite a function that reads /etc/ldap.conf and parses out some LDAP server connection info. I had to read in more data, and change its API slightly as it was limited in the amount of data it could return. Inspired by the book, I started by writing some unit tests (using PyUnit) and then added more as I added features to the function.

Being able to run the tests while adding features, making sure that adjustments to parse and return a new type of data didn't break old data, made me feel happy and productive. Having to think about what kind of data should be returned, and therefore tested, also gave a better and more complete design.

In short - I'll be writing more unit tests from now on. I'm sure the book (when it arrives) will give me plenty of inspiration.


(Almost) perfect programming weather!

Published: 2007-01-14 13:42 UTC. Tags: software

The weather in Linköping is really cool right now. There's a major storm (the weather service warns about 30 m/s winds) and some rain.

Perfect weather for some programming. Currently working on the new tracker for python-dev (based on Roundup).

Now, let's hope the power grid doesn't fail. About 120,000 people in Sweden have no electricity right now.

Running Wordpress on accounts

Published: 2007-01-14 12:56 UTC. Tags: software wordpress

My girlfriend has a hosting account at for her domain (number of .se-domains owned by the two people in this household: Three). Until now, she has been running static html pages generated by Dreamweaver, but a month or so ago, she decided she wanted to go for Wordpress.

As she has a background as a PC technician and a good understanding of how computers work in general, and also because Wordpress is really easy to install, she managed to get Wordpress installed and working without much help. It did not work 100%, though. The file upload dialog was not working, instead displaying an error message from Apache, and after many common administrative operations the same Apache error message was displayed, and you had to manually navigate to a known URL to get back into the interface.

After some analysis using Wireshark, I came to the conclusion that, for some reason, the Apache servers at didn't let PHP scripts set the Location header when doing redirects with HTTP response code 302 (Moved Temporarily). Weird.

The solution was to enable support for my-hacks.php (can be done via the administrative interface), and add a my-hacks.php to the root directory of the wordpress installation with the following contents:

<?php
$is_IIS = 1;
?>

This causes wordpress to use an alternative strategy for doing redirects, which works better with the Apache servers at

The fact that has turned off regular expression support in mod_rewrite did, by the way, not help when writing rules to make sure old links redirect to the new URL scheme used by Wordpress.

Why does Windows 2003 Server need the CD to remove things?

Published: 2006-12-07 15:00 UTC. Tags: software
Windows is full of surprises. Today I removed the DHCP server role from a Windows 2003 Server. The removal process asked me for the installation CDROM.



Installing Ubuntu on a machine with no CDROM drive

Published: 2006-11-29 21:35 UTC. Tags: software linux

Today I had to install Ubuntu on one of the older machines in the computer room. It's a 1U server without CDROM drive.

Ubuntu doesn't seem to ship any floppy images. It does ship a utility for booting from IDE CDROM drives in cases where the BIOS is too old or too buggy to boot from the CDROM: you create a floppy that has drivers for the CDROM and can boot the CD. To be specific, the image shipped with the CD contains Smartbootfloppy, which has a webpage (sort of) at

However, since this machine had no CD at all, this didn't solve the problem.

My initial thought was to try using a CDROM drive connected to the USB port (at least the machine is new enough to have two USB ports). This however proved impossible due to the BIOS lacking functionality to recognize and boot from USB CDROM drives. And the boot manager from Ubuntu doesn't recognize USB drives either. Dead end.

I did some research on this, and found several places where people were asking how to do it, but no places where people actually got good answers.

The solution to my problem was to create a boot floppy with etherboot. I went to and located my network card in the dropdown. I was lucky enough to know the exact PCI IDs of the network card, which helped in finding the correct driver. If you don't know, try to locate the card by name in the list. Opt for a bootable floppy image, download it and then write it to a floppy per the instructions on the site.

You then need to configure a TFTP server on another machine on the same network. The server should be configured to serve the contents of the /install/netboot directory of the Ubuntu CD as its root directory. This way, when the computer you are about to install asks for the file pxelinux.0, the pxelinux.0 in /install/netboot on the Ubuntu CD will be served.

I did this by installing the atftpd package, mounting my Ubuntu CD on /media/cdrom and adding /media/cdrom/install/netboot as a command-line argument to in.tftpd in /etc/inetd.conf. Don't forget to restart inetd after doing this.
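For reference, the resulting inetd line looked roughly like this. I'm writing it from memory, so treat the exact atftpd options and paths as a sketch rather than something copied from the machine:

```
# /etc/inetd.conf - serve the Ubuntu netboot directory over TFTP
tftp  dgram  udp  wait  nobody  /usr/sbin/in.tftpd  in.tftpd /media/cdrom/install/netboot
```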

I then configured my DHCP server as follows:

host tobeinstalled {
        hardware ethernet 00:80:C8:F8:51:25;
        next-server myinstallhost;
        filename "pxelinux.0";
        allow bootp;
        allow booting;
}

This will make dhcpd tell the client that it should ask myinstallhost for pxelinux.0 at boot.

Inserted the floppy in the computer to be installed and rebooted it. It downloaded a bunch of files via TFTP, and then gave me the regular Ubuntu install prompt. Yay!


Why don't normal people find Open Source Software?

Published: 2006-11-07 21:01 UTC. Tags: open source software

I spoke to a customer today. He's head of the IT department for a middle-sized organization with 6000 users.

He had a project going on with the goal of finding and implementing a document management system for the whole organization.

The project group had reported back to him telling him that they had found three (yes, 3) candidates, and that the candidate they thought was best was some Microsoft product.

He was a little bit puzzled that they only had found three candidates. I had to agree, so I immediately made a quick search on Freshmeat for document management and got 105 hits.

I think this reflects a problem that the open source world has to pay more attention to - people don't know that there are alternatives to the products from Microsoft and the other big companies with large PR budgets.


First Impression - Samsung ML-2571N B/W Laser Printer

Published: 2006-10-18 20:32 UTC. Tags: software hardware review

My girlfriend and I ordered a printer a few weeks ago. Today, it finally arrived.

This is a Black&White Laser printer with network capabilities.

Network Capability

Why network capabilities? Because it's reliable! I'm running a Linux workstation at home, my girlfriend has a Windows XP box. I don't trust her machine to be up and serving the printer when I need it. She doesn't trust my machine to be up and serving the printer when she needs it. She's probably right :-). This poor machine of mine is a bit experimental.

So, to avoid any computer-related trouble, we bought a printer with network connectivity that could easily be hooked up to our little apartment network. The printer speaks 100Mbit/s Ethernet and was configured to get an IP via DHCP at startup. Excellent!

Size and Weight

It's a nice little machine. It's very lightweight, so there's no trouble keeping it on a shelf above the screen. It's also quite small, so there was no trouble finding a place to put it.


Speed

Very fast! It prints in no time!


Print Quality

Oh.. well.. it's a black&white printer we'll use for documents, and as far as I can tell, it prints well enough for that.

Linux Drivers

This is where the, ehm, "fun" begins. I do have quite a lot of experience in printing on Linux, especially with CUPS, so I do have some opinions on how Linux printers should behave and how to install them.

When it comes to this printer and its Linux drivers, I'm both impressed and quite unimpressed.

I'm impressed, because there is official Linux support, and not only for Red Hat Linux 7.3 or some other ancient distribution, but for all kinds of distributions.

I'm not impressed, because they have missed quite a few things, and the installation procedure is far from obvious and also full of bugs.

When you insert the CD, which claims to contain Linux drivers, you are at first impressed that they ship an autorun file for Linux. Then you are less impressed when you realize they have missed that many modern Linux distributions mount CDs noexec. This, of course, makes the installation fail ungracefully.

Fortunately, I'm experienced enough to understand this, so after remounting with exec, I started the installation program Linux/

This fires up a Qt-based installation program that first tries to locate a locally connected printer. In my case, it didn't find any, since the printer is connected via the network and not via USB or parallel port. It then offers to search for network printers. Being curious, I fired up Ethereal to see how it did that, and found that it broadcasts for printers using, I think, SLP (Service Location Protocol). Clever use of standard protocols! It did find the printer after a short while. Impressive!

But after this, I'm unimpressed again. The installation program starts installing stuff. Yeah, that's right. Stuff. It doesn't tell much about what it's installing, and the manual ain't clear on that either. Too much magic.

When the installation program ends, I have two processes running as root doing.. I don't know what! The printer has not been added in CUPS. If I try to print from my web browser, I get a well-designed but malfunctioning interface featuring a picture of the printer. I don't know how it got there, and it doesn't work - it tells me the printer is not started.

Oh, my. As usual, hardware manufacturers try to make everything so seamless and smooth that the result is that nothing works.

Here's my recipe for how printer manufacturers should support printers:

  1. Provide packages for the major Linux distributions with the PPDs and any CUPS filters needed.
  2. Provide an installation program that either installs the packages, if there is a package for the distribution, or tells the user which files will be installed and where.
  3. Let the installation program locate and install the printer in CUPS.
  4. That's all, folks.

I uninstalled the whole thing (there was, and I'm happy and impressed by this, an uninstall script shipped on the CD), located the PPD on the CD and copied it into CUPS' ppd directory, then restarted CUPS and added the printer via the CUPS web interface. Works like a charm.



Published: 2006-10-17 21:14 UTC. Tags: open source software humor in-swedish

SuSE is a strange Linux distribution. They love doing everything differently, preferably in a way that causes trouble for those of us who know how things work on every other distribution. Don't get me started on their creative interpretation of how PAM and NSS are supposed to work.

It hasn't gotten any better since they were bought by Novell...

Now I've figured out why it's so odd. You can hear it in the name!



Observations, 2006-09-22

Published: 2006-09-25 06:24 UTC. Tags: software hardware
  • The correct settings for talking to a Fujitsu-Siemens Primepower 250 (or a Sun VFire 250, or probably any Sun) via the serial console port are 9600 8N1, no software handshake, no hardware handshake.
  • You know you're dealing with serious hardware when there's a setting for altitude in the system configuration. The Primepower 250 uses this setting to calculate how fast the fans should spin for a given system temperature.
  • Minicom does not handle UTF-8 xterms or ttys very well.. LANG=C is recommended.

ZSI vs. SugarCRM, 1-0

Published: 2006-09-13 22:20 UTC. Tags: software sugarcrm soap

Yay! I've started to understand how to use the Zolera Soap Infrastructure to communicate with the SOAP interface of SugarCRM. Today's understatement is hereby delivered: it's not the easiest thing in the world. It doesn't help that most of the methods in SugarCRM's SOAP interface are documented like this:


Well, they do tell which datatypes they expect as well, but not exactly how they should be filled in.

Anyway, today I managed to create a meeting and connect it to the current user. Yay! Here's the code:

#!/usr/bin/env python

from sugarsoap_services import *
import md5

import sys

class LoginError(Exception): pass

def login(username, password):
    loc = sugarsoapLocator()

    portType = loc.getsugarsoapPortType()

    request = loginRequest()
    uauth = request.new_user_auth()
    request.User_auth = uauth

    uauth.User_name = username
    uauth.Password =
    uauth.Version = '1.1'

    response = portType.login(request)

    if -1 == response.Return.Id:
        raise LoginError(response.Return.Error)
    return (portType, response.Return.Id)

def add_meeting(portType, sessionid,
                date_start, time_start, name, duration_hours):

    gui_req = get_user_idRequest()
    gui_req.Session = sessionid
    user_id = portType.get_user_id(gui_req).Return

    print "user_id", user_id

    se_req = set_entryRequest()
    se_req.Session = sessionid
    se_req.Module_name = 'Meetings'

    se_req.Name_value_list = []
    for (n, v) in [('date_start', date_start),
                   ('time_start', time_start),
                   ('name', name),
                   ('duration_hours', duration_hours),
                   ('assigned_user_id', user_id)]:
        nvl = ns0.name_value_Def('name_value')
        nvl._name = n
        nvl._value = v
        se_req.Name_value_list.append(nvl)

    se_resp = portType.set_entry(se_req)

    meeting_id = se_resp.Return.Id

    # Now let's associate this meeting with the current user, to make
    # it appear in this user's calendar

    sr_req = set_relationshipRequest()
    sr_req.Session = sessionid
    sr_req.Set_relationship_value = sr_req.new_set_relationship_value()
    sr_req.Set_relationship_value.Module1 = 'Meetings'
    sr_req.Set_relationship_value.Module1_id = meeting_id
    sr_req.Set_relationship_value.Module2 = 'Users'
    sr_req.Set_relationship_value.Module2_id = user_id

    sr_resp = portType.set_relationship(sr_req)

    return sr_resp

if "__main__" == __name__:
    (portType, sessionid) = login('username', 'password')
    response = add_meeting(portType, sessionid, '2006-09-14',
                           '15:00:00', 'Soap Meeting', 1)

Piece of cake, huh? :-)


Observations 2006-09-12

Published: 2006-09-12 20:58 UTC. Tags: software

Some notes from today's work...

  • Don't interrupt a running yum update with kill -9; that messes things up. I had to reinstall some packages, and remove others, to get the machine upgraded from CentOS 4.3 to 4.4. Oh well..

    Btw, the error message from yum when it can't find any mirrors because the nameserver is completely down is.. confusing! I don't have it at hand, though.

    YUM delenda est.

  • Playing around with SOAP, talking to SugarCRM, got much easier after reading the "guide.pdf", which is unfortunately only available in the mail archive (and I can't find a web archive with the pdf available to link to). Hopefully, it'll be added to the pywebsvcs web pages soon.

I'm thinking about writing a SugarCRM plugin for opensync. That would be cool. And useful. The code for moto-sync, in Python, is very readable and could serve as an example.


Observations 2006-09-11

Published: 2006-09-11 21:46 UTC. Tags: software

Some notes from today's work..

  • eDirectory is actually smart enough to keep track of alias objects. If you move or delete an object, the alias is updated/removed.
  • It's not at all recommended to try major upgrades of a remote Linux machine when the network goes up, down, up, down, up, down. Bah!
  • SOAP, which is an acronym for Simple Object Access Protocol, is of course, like many other protocols with "Simple" in the name, not simple at all.
  • The SOAP implementation in SugarCRM has (at least one) bug. It doesn't send along a non-empty error field in the response to get_module_fields(). See this bug for a patch.

Getting Gnus Archive Messages on IMAP

Published: 2006-09-08 20:41 UTC. Tags: software

Now and then, I've made half-hearted attempts to make Gnus archive messages to an IMAP mailbox, but so far I've failed with various spectacular error messages.

Today I managed to get just the right keywords in my google query, and ended up at this blog post, which gave the answer:

(setq gnus-message-archive-group "nnimap+<select-method-name>:INBOX.Sent")

It seems to work, even after a restart of emacs. The real test is to see if it works tomorrow too! :-)

The funny part is that the blog post actually refers to my page on Gnus+nnimap+Courier IMAP+SSL. It's always funny when you're referred back to your own pages when looking for information about something you need help with :-).


Autofs and LDAP

Published: 2006-06-27 18:55 UTC. Tags: software

Today I began working on replacing the NIS installation at work with an LDAP database.

As I've used the PAM and NSS LDAP modules a lot at customer sites integrating against eDirectory, I was rather comfortable with that part of the integration. What I didn't know much about was how the automounter integrates with LDAP.

It turns out that this was rather easy, although the documentation is sparse. Also, the fact that I have to cope with the rather old autofs in Red Hat Linux 7.3 complicated the installation a bit (I have one machine on the network that must run RHL 7.3; the rest of the machines run a variety of modern Linux distributions).

Client Configuration

Instead of having to manually specify which mount points to automount in /etc/auto.master on each client, all configuration is stored in LDAP. To instruct autofs to read LDAP to find automountpoints, add ldap to the automount line in /etc/nsswitch.conf. In my case, the line looks like this:

automount: files ldap

This instructs automount to first check /etc/auto.master for mount points, and then search LDAP.

Which LDAP Server is Used?

Autofs has to know which LDAP server to use. The method of acquiring this information seems to differ between distributions. The Red Hat Linux machine reads /etc/ldap.conf, which is the configuration file for nss_ldap and pam_ldap, while my Debian sarge workstation reads /etc/ldap/ldap.conf, which is the configuration for the OpenLDAP libraries. I have yet to see which file is used by Fedora Core et al.

It also seems like there's some support for finding the LDAP server via DNS records, but I haven't investigated this further.

Old Autofs - LDAP v2-style bind, but with LDAP v3?

One problem on the Red Hat Linux 7.3 machine was that it was doing an LDAP version 2-style bind over LDAP version 3. It was trying to bind as the DN of the ou for an automount map (more about this later), with a null password. My OpenLDAP server didn't like this, expecting either a regular bind with DN and password, or an anonymous bind with null DN and null password.

I solved this problem by patching autofs. I could probably have upgraded autofs instead, but I'm not sure autofs 4 works with the current kernel version, so patching was easier. I still have to include allow bind_v2 in the server configuration.

Data in LDAP

To find which mount points to handle, autofs searches LDAP for entries with the objectclass automountMap. It then searches for all entries under this DN with the objectclass automount, each of them representing a mount point for the automounter to handle.

Each of the automount entries under the automountMap entry points to another container in the LDAP tree, under which you store one automount entry per possible subdirectory to the mount point.

Confused? Let's look at an example.

The automountMap and its subtree looks like this:

dn: ou=auto.master,ou=autofs,dc=example,dc=com
ou: auto.master
objectClass: top
objectClass: automountMap

dn: cn=/import,ou=auto.master,ou=autofs,dc=example,dc=com
objectClass: automount
cn: /import

dn: cn=/home,ou=auto.master,ou=autofs,dc=example,dc=com
objectClass: automount
cn: /home

This tells the automounter that it should handle /home, and that information about which directories are available to mount under /home is available on the LDAP server under the DN ou=auto.home,ou=autofs,dc=example,dc=com.

There's similar information for our second automount point, /import

Now, let's inspect ou=auto.home,ou=autofs,dc=example,dc=com

dn: ou=auto.home,ou=autofs,dc=example,dc=com
ou: auto.home
objectClass: top
objectClass: organizationalUnit

dn: cn=wingel,ou=auto.home,ou=autofs,dc=example,dc=com
cn: wingel
objectClass: automount
automountInformation: -rsize=8192,wsize=8192,intr fileserver:/export/home/wingel

dn: cn=thomas,ou=auto.home,ou=autofs,dc=example,dc=com
cn: thomas
objectClass: automount
automountInformation: -rsize=8192,wsize=8192,intr fileserver:/export/home/thomas

dn: cn=forsberg,ou=auto.home,ou=autofs,dc=example,dc=com
cn: forsberg
objectClass: automount
automountInformation: -rsize=8192,wsize=8192,intr fileserver:/export/home/forsberg

As you can see, under the auto.home ou, there's one entry for each possible mount under /home. The automountInformation attribute contains the information the automounter uses to do the actual mount.
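With many users, writing these entries by hand gets tedious. Here's a small sketch of a generator for such LDIF; the function name is my own, and the base DN, mount options and server name are simply the example values from above:

```python
def automount_ldif(user, base_dn="ou=auto.home,ou=autofs,dc=example,dc=com",
                   server="fileserver",
                   options="-rsize=8192,wsize=8192,intr"):
    """Build one automount LDIF entry for a user's home directory."""
    return (
        "dn: cn=%s,%s\n"
        "cn: %s\n"
        "objectClass: automount\n"
        "automountInformation: %s %s:/export/home/%s\n"
        % (user, base_dn, user, options, server, user)
    )

# Print entries for a couple of users, ready to feed to ldapadd
for user in ["wingel", "thomas"]:
    print(automount_ldif(user))
```

Pipe the output to ldapadd (or save it and review first) to populate the auto.home subtree.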


RSH - an unpleasant guest from the past

Published: 2006-03-04 13:25 UTC. Tags: software sugarcrm

Yesterday I spent some time evaluating the Open Source version of SugarCRM, since the salespeople at work want to see if a CRM can help them.

I installed an extension called ZuckerMail to allow reading of mail stored on our IMAP server, and was irritated by the long time it took to get the list of mail, or to read a single mail.

I did some strace:ing, but that didn't help much - it only revealed that something was waiting for 10 seconds. A process listing, however, revealed several rsh processes trying to contact the IMAP server. Also, my firewall had logged several blocked connection attempts from this machine to the IMAP server on port 22 (ssh).

Aha.. but why?

It turns out that the IMAP library for PHP is built on top of the ancient c-client library from UW IMAP. If you make a connection without explicitly telling it that you want SSL (something the GUI for configuring the connection had no option for - only STARTTLS, which is a different thing), the c-client library tries to run rimapd on the host via rsh. There is a /usr/bin/rsh on the machine, linked to /usr/bin/ssh via /etc/alternatives (Debian sarge).

You can turn this behaviour off in a configuration file for c-client. I solved the problem a bit more drastically - by removing /usr/bin/rsh. This made all IMAP operations go much faster.


'locate' on SuSE-based Distributions

Published: 2006-01-30 13:01 UTC. Tags: software linux

One of the things that always irritates me when I have to work with a SuSE-based Linux distribution is the missing locate command.

For future reference, it's available in the findutils-locate package, so the next time I forget the name of the package, I can search my blog :-).