I try to keep the same .zshrc dotfile on all the machines I use. I store the “master” copy in SVN, so any improvements I make can easily go to other computers.
However, some parts of the zshrc configuration only make sense for a particular machine. To keep one common master copy but also allow for some “local” configuration, you can simply use an include in your .zshrc like this:

[[ -r ~/.zsh/local.zsh ]] && . ~/.zsh/local.zsh
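What goes into local.zsh is whatever only makes sense on that one machine (proxy settings, machine-specific aliases). The guard itself can be exercised outside of zsh too; a quick sketch using POSIX `[` and a temp directory instead of $HOME:

```shell
# demonstrate the same "source it only if it exists" guard
ZDIR=$(mktemp -d)
echo 'LOCAL_SETTING=on' > "$ZDIR/local.zsh"
# same pattern as in .zshrc: source only if the file is readable
[ -r "$ZDIR/local.zsh" ] && . "$ZDIR/local.zsh"
echo "$LOCAL_SETTING"   # prints: on
```

If the file is absent, the `&&` short-circuits and nothing happens, which is exactly why the shared .zshrc stays portable.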

Solr can easily be extended to handle binary files and extract information from them. The Apache Tika library is used for the file analysis.
This can be set up in Solr using the ExtractingRequestHandler, which is already configured in solrconfig.xml. All we need to do is add the extra libraries. Once you have your Solr set up, copy:

  • contrib/extraction/lib/* (from the downloaded solr package) into /var/lib/tomcat6/webapps/solr/WEB-INF/lib
  • solr/apache-solr-4.0.0-BETA/dist/apache-solr-cell-4.0.0-BETA.jar into /var/lib/tomcat6/webapps/solr/WEB-INF/lib

Restart Tomcat and index a sample document (I’m using test.pdf, which I have in the current directory). The handler is available at update/extract:

curl "http://localhost:8080/solr/update/extract?literal.id=doc1&commit=true" -F "myfile=@test.pdf"

I have provided a unique id for the document by passing the literal.id=doc1 option; to index a second document I’d use:

curl "http://localhost:8080/solr/update/extract?literal.id=doc2&commit=true" -F "myfile=@test.pdf"

That’s all – here is the result of executing “*” query:

<?xml version="1.0" encoding="UTF-8"?>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">4</int><lst name="params"><str name="q">*</str><str name="wt">xml</str></lst></lst><result name="response" numFound="2" start="0"><doc><str name="id">doc1</str>
<str name="author">Tomasz Muras</str>
<str name="author_s">Tomasz Muras</str><arr name="content_type"><str>application/pdf</str></arr>
<arr name="content">
<str>Test Test Test</str></arr>
<long name="_version_">1413927282391646208</long></doc>
<doc><str name="id">doc2</str>
<str name="author">Tomasz Muras</str>
<str name="author_s">Tomasz Muras</str>
<arr name="content_type"><str>application/pdf</str></arr>
<arr name="content"><str>Test Test Test</str></arr>
<long name="_version_">1413934423130243072</long></doc></result>

I had a problem when I created a document/folder name in Alfresco Share that included Polish characters, e.g. ą, ę, ł, ż, ź. The Polish characters were at first rendered correctly, but in the database (MySQL) they were saved incorrectly: a ? character was stored instead of each Polish letter. To make it more confusing, the names were rendered wrongly not necessarily straight away, but for example after a server restart. Such behaviour implies that there had to be a communication issue between Alfresco and the database, and that the data were saved in the database using the wrong encoding. The Alfresco application had the file names cached, so it did not use the database to obtain them. When the server was restarted, the data were read from the database and ? characters were shown. Unfortunately, having ? characters caused problems, for example when reading folder contents.

The solution to this issue was to change the default encoding in the database to UTF-8. In MySQL this can be done in the configuration file (/etc/mysql/my.cnf on Ubuntu). The following lines should be added to the appropriate sections of the configuration file.
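A typical my.cnf UTF-8 setup looks like the following (the exact option names vary slightly between MySQL versions; character-set-server is the [mysqld] option on 5.5+, older servers use default-character-set there as well):

```ini
[client]
default-character-set = utf8

[mysqld]
character-set-server = utf8
collation-server = utf8_general_ci
```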





After logging in to MySQL and running the status command, the following information regarding encoding should be shown:

Server characterset:    utf8
Db     characterset:   utf8
Client characterset:   utf8
Conn.  characterset:    utf8

Update on 2012.09.20: updated for Solr 4.0-BETA (from ALPHA, thanks for the comment Dorthe).
Update on 2013.07.09: updated for Solr 4.3.1
Update on 2013.07.28: the guide works with Solr 4.4, Ubuntu Server 13.04 and Tomcat 7 – just replace tomcat6 with tomcat7 and /var/lib/tomcat6/shared with /var/lib/tomcat7/lib

This short guide describes how to install Solr 4 on Ubuntu Server. The versions I’m using are Ubuntu Server 12.04 and Apache Solr 4.3.1. I will also show how to test the installation and perform a sample indexing and query.

Installation on Tomcat, Ubuntu 12.04 LTS

1. Install packages

 apt-get install tomcat6 curl

2. Download Solr 4 from http://lucene.apache.org/solr (at the time of writing it was solr-4.3.1.tgz)
3. Choose a directory for Solr – known as SOLR_HOME. I’ve chosen to put it into /opt/solr, so my SOLR_HOME=/opt/solr. Replace /opt/solr with your own location if you want a different one.
4. Extract the downloaded files anywhere and copy the following to your $SOLR_HOME and to your Tomcat installation:

  • copy example/solr/* to /opt/solr
  • copy example/webapps/solr.war to /opt/solr
  • copy example/lib/ext/* to /var/lib/tomcat6/shared

5. Edit dataDir in the Solr configuration file /opt/solr/collection1/conf/solrconfig.xml:
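The stock solrconfig.xml defines this element as <dataDir>${solr.data.dir:}</dataDir>; change it to point at the directory we create in the next step (the /opt/solr/data path is the one used throughout this guide):

```xml
<dataDir>/opt/solr/data</dataDir>
```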


6. Create a directory for Solr data and make it writable by the Tomcat server

% mkdir /opt/solr/data
% sudo chown tomcat6 /opt/solr/data

Here is what my /opt/solr directory looks like (showing only directories):

$ tree -d
├── bin
├── collection1
│   └── conf
│       ├── lang
│       ├── velocity
│       └── xslt
└── data

7. Set up a new context in the Tomcat server pointing to our Solr files. Create the file /etc/tomcat6/Catalina/localhost/solr.xml with the following content:

<?xml version="1.0" encoding="utf-8"?>
<Context docBase="/opt/solr/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String" value="/opt/solr" override="true"/>
</Context>

8. Restart tomcat

/etc/init.d/tomcat6 restart

9. Enjoy your newly set up solr4 by pointing your browser to http://localhost:8080/solr.

solr dashboard

Sample indexing and UTF-8 test

The Solr installation files come with a sample schema.xml (we’ve already copied it into our $SOLR_HOME) and some .xml files with sample data we can import. We will use one of them to test that UTF-8 encoding is working as expected.
1. Go to the directory with the extracted Solr installation files and import utf8-example.xml using curl ($URL here is your update handler, e.g. URL=http://localhost:8080/solr/update):

curl $URL --data-binary @example/exampledocs/utf8-example.xml -H 'Content-type:application/xml'

The response from the server should be similar to

<?xml version="1.0" encoding="UTF-8"?>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">22</int></lst>

2. Commit the documents

curl "$URL?softCommit=true"

3. Test it by searching for the êâîôû string. Use the Solr administrative UI, or this GET request should do: http://localhost:8080/solr/collection1/select?q=êâîôû. You should see exactly one result.


This post describes how to debug JavaScript in Alfresco/Share.

There are two types of js files used in Alfresco/Share:

  • client side – they are placed in the Share root directory
  • server side – they are placed under the WEB-INF/alfresco directory in Share and Alfresco and are used, for example, by web scripts

Client side

Share Debugger

To debug client-side JavaScript, the client-debug and client-debug-autologging flags in the Share configuration file share/WEB-INF/classes/alfresco/share-config.xml can be set to true, as presented below. This allows you to use the JavaScript debugger after pressing Ctrl, Ctrl, Shift, Shift. Setting client-debug to true causes the original *.js files to be used instead of their minimised *-min.js versions. Setting client-debug-autologging to true enables the JavaScript debugger console.

   <flags>
      <!--
         Developer debugging setting to turn on DEBUG mode for client scripts in the browser
      -->
      <client-debug>true</client-debug>
      <!--
         LOGGING can always be toggled at runtime when in DEBUG mode (Ctrl, Ctrl, Shift, Shift).
         This flag automatically activates logging on page load.
      -->
      <client-debug-autologging>true</client-debug-autologging>
   </flags>

Web Browser Debugger

Apart from that, the standard tools provided by web browsers can be used. They are really great and include:

  • Web Console (Tools -> Web Developer) in Firefox
  • Developer Tools (Tools) in Chrome

Server side

Log file

It is not so straightforward to debug server-side scripts in Alfresco. Therefore there is a logging class that writes logging messages from JavaScript to the standard log files/output. To see those messages, change the logging level for the org.alfresco.repo.jscript.ScriptLogger class to DEBUG. The corresponding line of the WEB-INF/classes/log4j.properties file is presented below:
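Built from the class and level named above, the line is:

```properties
log4j.logger.org.alfresco.repo.jscript.ScriptLogger=DEBUG
```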


Then you can use the following command in your JavaScript to log the messages:

 logger.log("Log me");

Alfresco/Share Debugger

You can also activate the server-side JavaScript debugger to assist your development. To do so, use the following links and enable the debugger there:

  • Share: share/service/api/javascript/debugger
  • Alfresco: alfresco/service/api/javascript/debugger

Make sure that the following line is set to “on” in WEB-INF/classes/log4j.properties:
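The exact logger name differs between the repository and Share webapps; in the Alfresco repository’s log4j.properties it is typically the following (check your own log4j.properties for the precise name):

```properties
log4j.logger.org.alfresco.repo.web.scripts.AlfrescoRhinoScriptDebugger=on
```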


Normally authentication is handled by Symfony nearly automatically – you just need to define and configure your firewalls. Sometimes, however, you may want to perform authentication manually from a controller.
Imagine implementing automated login for a user upon visiting a URL like /autologin/{secret}. I am not considering the security of such a solution here – you are discouraged from doing it this way, unless the information available behind this kind of “login” is not confidential.

Here is a fragment from my security.yml:

        secured_area:
            pattern:    ^/
            form_login:
                check_path: /login_check
                login_path: /login

The actual authentication is very straightforward. Since I’m redirecting at the end of the request, I don’t even need the user to be authenticated in this action. All that is needed is to persist the information about the authenticated user to the session. This means storing a serialized class that implements TokenInterface. Normally this is done by the Symfony framework in ContextListener. In my scenario I’m using form login, which uses UsernamePasswordToken, so in short here is what I need to do:

  • Find user
  • Create the Token
  • Store Token in the session

Pay attention to the “secured_area” string – it matches the firewall name from security.yml and is used both when creating the token and when creating the session key.

    /**
     * @Route("/autologin/{secret}")
     */
    public function autologinAction($secret) {
        $em = $this->getDoctrine()->getEntityManager();
        $repository = $em->getRepository('MiedzywodzieClientBundle:Reservation');
        $result = $repository->matchLoginKey($secret);
        if (!$result) {
            return $this->render('MiedzywodzieClientBundle:Default:autologin_incorrect.html.twig');
        }
        $result = $result[0];

        // Create the token; 'secured_area' matches the firewall name in security.yml
        $token = new UsernamePasswordToken($result, $result->getPassword(), 'secured_area', $result->getRoles());

        // Store the token in the session; ContextListener restores it on the next request
        $request = $this->getRequest();
        $session = $request->getSession();
        $session->set('_security_secured_area', serialize($token));

        $router = $this->get('router');
        $url = $router->generate('miedzywodzie_client_default_dashboard');
        return $this->redirect($url);
    }

The purpose of this post is to present the creation of a new workflow that copies the attached file to a selected location depending on whether the document was approved or rejected. In addition, I explain the workflow console in more detail and show how to gather more information about workflows from it.

Creation of workflow and gathering information from workflow console

Let’s create a simple ‘Review and Approve’ workflow. The workflow has one document attached. The screenshot with the initial workflow settings is presented below.

Open the workflow console via the URL presented below. In this post all the URLs start with ‘http://localhost:8080/alfresco’, which is the path to your Alfresco deployment.


In the workflow console, run the command that shows all the workflows.

show workflows all

You get the following information:

id: activiti$4265 , desc: Please review , start date: Tue May 15 20:18:07 IST 2012 , def: activiti$activitiReview v1

Let’s see more details about the workflow we have just started. As we can see in the previous listing, the id of the workflow is ‘activiti$4265’.

desc workflow activiti$4265

The outcome of the command is presented below. Note that under the information about the package we have a node reference.

definition: activiti$activitiReview
id: activiti$4265
description: Please review
active: true
start date: Tue May 15 20:18:07 IST 2012
end date: null
initiator: workspace://SpacesStore/08b80f86-1db3-44ed-b71a-02ebe4e932aa
context: null
package: workspace://SpacesStore/8d33211a-9f65-42f8-836e-54e2e445d140

Let’s run the node browser and check the node reference from package (workspace://SpacesStore/8d33211a-9f65-42f8-836e-54e2e445d140).


The relevant information about the node is presented below. As we can see, the reference node is a container for all the documents attached to the workflow. In our case it contains the file ‘mikolajek.jpg’ attached at workflow creation. This information will be useful when we have to find the nodes to be copied.


Child Name      Child Node                                                      Primary    Association Type                                          Index
mikolajek.jpg   workspace://SpacesStore/5351a554-3913-433f-8919-022d6dead7ce    false      {http://www.alfresco.org/model/bpm/1.0}packageContains    -1

Creation of new workflow

This section describes how to create a new workflow that, depending on whether the task was approved or rejected, adds an appropriate aspect to all the files attached to the workflow. Let’s call the aspect ‘workflowOutcomeAspect’ and allow it to have two values: ‘approved’ or ‘rejected’. The definition of the new aspect’s constraint is presented below.

 <constraint name="wf:allowedOutcome" type="LIST">
        <parameter name="allowedValues">
            <list>
                <value>approved</value>
                <value>rejected</value>
            </list>
        </parameter>
 </constraint>
Following that, let’s modify the initial workflow (‘Review and Approve’) to add ‘workflowOutcomeAspect’ to all the child nodes of the package node and set the ‘workflowOutcome’ property of that aspect to ‘approved’ or ‘rejected’, depending on the user action. Note that ‘Review and Approve’ is one of the standard workflows available with an Alfresco deployment. The package is available in JavaScript under the ‘bpm_package’ variable and its children can be obtained by invoking ‘bpm_package.children’. More information about the creation and management of workflows can be found in my post Creation of workflow in Alfresco using Activiti step by step.

<aspect name="wf:workflowOutcomeAspect">
    <title>Workflow Outcome</title>
    <properties>
        <property name="wf:workflowOutcome">
            <title>Workflow Outcome</title>
            <type>d:text</type>
            <constraints>
                <constraint ref="wf:allowedOutcome" />
            </constraints>
        </property>
    </properties>
</aspect>
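The workflow modification described above can be sketched as server-side JavaScript attached to the task transitions (a sketch, not the stock workflow code; bpm_package is provided by Alfresco’s workflow JavaScript API, and the hard-coded "approved" would be "rejected" on the reject transition):

```javascript
// Runs in Alfresco's Rhino engine, e.g. as an Activiti task listener script.
// Tag every document in the workflow package with the outcome aspect.
for (var i = 0; i < bpm_package.children.length; i++) {
    var doc = bpm_package.children[i];
    doc.addAspect("wf:workflowOutcomeAspect");
    doc.properties["wf:workflowOutcome"] = "approved"; // "rejected" on the reject path
    doc.save();
}
```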

Creation of rule to copy the documents

On workflow approval or rejection the aspect property ‘workflowOutcome’ will be set to the appropriate value. In Alfresco Explorer or Share, let’s create a rule that checks whether documents in a particular folder have ‘workflowOutcome’ set and, depending on its value, copies the documents to a selected folder. Select the ‘copy’ action in the rule. The rule summary is presented below. In fact, I have created two rules – one to copy approved documents and one to copy rejected ones.

Rule summary

Rule Type:  update
Name:   Approved documents
Apply rule to sub spaces:   No
Run rule in background: Yes
Disable rule:   No
Conditions: Text Property 'wf:workflowOutcome' Equals To 'approved'
Actions:    Move to 'approved'

Rule Type:  update
Name:   Rejected documents
Apply rule to sub spaces:   No
Run rule in background: Yes
Disable rule:   No
Conditions: Text Property 'wf:workflowOutcome' Equals To 'rejected'
Actions:    Move to 'rejected'

I hope that you have enjoyed the post and found it useful.


This article describes a few useful bits and pieces about running Apache Tomcat.

Setup of Tomcat environment variables – setenv.sh

As stated in the CATALINA_BASE/bin/catalina.sh file, the following environment variables can be set in CATALINA_BASE/bin/setenv.sh. The setenv.sh script is run on Tomcat startup. It is not present in the standard Tomcat distribution, so it has to be created.

  • CATALINA_HOME May point at your Catalina “build” directory.
  • CATALINA_BASE (Optional) Base directory for resolving dynamic portions of a Catalina installation. If not present, resolves to the same directory that CATALINA_HOME points to.
  • CATALINA_OUT (Optional) Full path to a file where stdout and stderr will be redirected. Default is $CATALINA_BASE/logs/catalina.out.
  • CATALINA_OPTS (Optional) Java runtime options used when the “start”, “run” or “debug” command is executed. Include here, and not in JAVA_OPTS, all options that should only be used by Tomcat itself, not by the stop process, the version command etc. Examples are heap size, GC logging, JMX ports etc.
  • CATALINA_TMPDIR (Optional) Directory path location of the temporary directory the JVM should use (java.io.tmpdir). Defaults to $CATALINA_BASE/temp.
  • JAVA_HOME Must point at your Java Development Kit installation. Required to run with the “debug” argument.
  • JRE_HOME Must point at your Java Runtime installation. Defaults to JAVA_HOME if empty. If JRE_HOME and JAVA_HOME are both set, JRE_HOME is used.
  • JAVA_OPTS (Optional) Java runtime options used when any command is executed. Include here, and not in CATALINA_OPTS, all options that should be used by Tomcat and also by the stop process, the version command etc. Most options should go into CATALINA_OPTS.
  • JAVA_ENDORSED_DIRS (Optional) List of colon-separated directories containing some jars in order to allow replacement of APIs created outside of the JCP (i.e. DOM and SAX from W3C). It can also be used to update the XML parser implementation. Defaults to $CATALINA_HOME/endorsed.
  • JPDA_TRANSPORT (Optional) JPDA transport used when the “jpda start” command is executed. The default is “dt_socket”.
  • JPDA_ADDRESS (Optional) Java runtime options used when the “jpda start” command is executed. The default is 8000.
  • JPDA_SUSPEND (Optional) Java runtime options used when the “jpda start” command is executed. Specifies whether the JVM should suspend execution immediately after startup. The default is “n”.
  • JPDA_OPTS (Optional) Java runtime options used when the “jpda start” command is executed. If used, JPDA_TRANSPORT, JPDA_ADDRESS, and JPDA_SUSPEND are ignored. Thus, all required JPDA options MUST be specified. The default is:
  • CATALINA_PID (Optional) Path of the file which should contain the pid of the Catalina startup java process, when start (fork) is used.
  • LOGGING_CONFIG (Optional) Overrides Tomcat’s logging config file. Example (all one line): LOGGING_CONFIG="-Djava.util.logging.config.file=$CATALINA_BASE/conf/logging.properties"
  • LOGGING_MANAGER (Optional) Overrides Tomcat’s logging manager. Example (all one line): LOGGING_MANAGER="-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"

In case you need more memory for your Tomcat instance, just put the following line in the setenv.sh file.

export JAVA_OPTS="-XX:MaxPermSize=1024m -Xms512m -Xmx4096m"

Running Tomcat – catalina.sh start|run|debug|jpda start

To run Tomcat you can use catalina.sh script with different options:
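As a reminder, these are the standard catalina.sh commands (see the script’s own header comments for the authoritative list):

```
./catalina.sh start       # start Tomcat in the background (output goes to CATALINA_OUT)
./catalina.sh run         # start in the foreground, logging to the console
./catalina.sh debug       # start under jdb (requires JAVA_HOME to point at a JDK)
./catalina.sh jpda start  # start with a JPDA debugger listening (JPDA_ADDRESS, default 8000)
./catalina.sh stop        # stop Tomcat
```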

To test the performance of multiple parallel file downloads, I had to make sure that a download takes a significant amount of time. I could use huge files, but that’s not very helpful if you work on a local, 1Gb LAN. So I’ve decided to limit download speeds from my Apache server to my PC. Here we go.

1. Mark packets to be throttled, in my case those originating from port 80

$ iptables -A OUTPUT -p tcp --sport 80 -j MARK --set-mark 100

2. Use the tc utility to limit traffic for the packets marked above (handle 100). Note that tc interprets “kbps” as kilobytes per second (“kbit” would be kilobits):

$ tc qdisc add dev eth0 root handle 1:0 htb default 10
$ tc class add dev eth0 parent 1:0 classid 1:10 htb rate 1024kbps ceil 2048kbps prio 0
$ tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 100 fw flowid 1:10

3. That’s it, you can monitor/check your rules with:

$ tc filter show dev eth0
$ tc -s -d class show dev eth0

and finally remove the throttling with:

$ tc qdisc del dev eth0 root
$ iptables -D OUTPUT -p tcp --sport 80 -j MARK --set-mark 100

Some time ago I had to process a lot of images in a simple way – remove the top and bottom parts of them. It was not a task I could automate – the amount of the image I had to cut from the top and bottom varied for each photo. To make the mundane work a bit easier, I’ve created a script – a Python plugin.

The script assumes you have put two guide lines onto the image. It finds them, cuts the image from between them and saves as a new file.

To create such a simple script in python you need to:

  • import gimpfu
  • run the register method that tells GIMP (among other things) the function name that implements the script (special_crop) and where to put a link to the script in the GIMP menu (<Image>/Filters)
  • implement your function
  • copy script to your custom scripts folder (e.g. /home/…/.gimp-2.6/plug-ins)

The other locations you could use when choosing where in the menu system a script should appear are:
“<Toolbox>”, “<Image>”, “<Layers>”, “<Channels>”, “<Vectors>”, “<Colormap>”, “<Load>”, “<Save>”, “<Brushes>”, “<Gradients>”, “<Palettes>”, “<Patterns>” or “<Buffers>”

And finally, the script itself. It’s fairly self-explanatory – enjoy and happy gimping!

#!/usr/bin/env python
# GIMP python-fu plugin: save the area between two horizontal guides as a new file
from gimpfu import *

def special_crop(image):
        print "Start"
        pdb = gimp.pdb
        # find the two guides placed on the image
        top = pdb.gimp_image_find_next_guide(image, 0)
        top_y = pdb.gimp_image_get_guide_position(image, top)
        bottom = pdb.gimp_image_find_next_guide(image, top)
        bottom_y = pdb.gimp_image_get_guide_position(image, bottom)
        if top_y > bottom_y:
                top_y, bottom_y = bottom_y, top_y
        print "Cutting from", top_y, "to", bottom_y
        # select the strip between the guides, copy it and paste it as a new image
        pdb.gimp_rect_select(image, 0, top_y, image.width, bottom_y - top_y, CHANNEL_OP_REPLACE, False, 0)
        pdb.gimp_edit_copy(image.active_drawable)
        image2 = pdb.gimp_edit_paste_as_new()
        new_filename = image.filename[0:-4] + "_cut.jpg"
        pdb.file_jpeg_save(image2, image2.active_drawable, new_filename, "raw_filename",
                           0.9, 0.5, 0, 0, "New file", 0, 0, 0, 0)

register(
    "special_crop",
    "Crop an image",
    "Crops the image.",
    "Tomasz Muras",
    "Tomasz Muras",
    "",                 # date (not present in the original snippet)
    "Special crop",
    "*",
    [
        (PF_IMAGE, "image", "Input image", None),
    ],
    [],
    special_crop,
    menu="<Image>/Filters")

main()