The purpose of this post is to present the creation of a new workflow that copies an attached file to a selected location depending on whether the document was approved or rejected. In addition, I explain the workflow console in more detail and show how to gather more information about workflows from it.
Creation of workflow and gathering information from workflow console
Let’s create a simple ‘Review and Approve’ workflow. The workflow has one document attached. The screenshot with the initial workflow settings is presented below.
Open the workflow console by visiting the URL presented below. In this post all URLs start with ‘http://localhost:8080/alfresco’, which is the path to your Alfresco deployment.
The relevant information about the node is presented below. As we can see, the referenced node is a container for all the documents attached to the workflow. In our case it contains the file ‘mikolajek.jpg’ attached on workflow creation. This information is going to be useful when we have to find the nodes to be copied.
Children
Child Name | Child Node | Primary | Association Type | Index
mikolajek.jpg | workspace://SpacesStore/5351a554-3913-433f-8919-022d6dead7ce | false | {http://www.alfresco.org/model/bpm/1.0}packageContains | -1
Creation of new workflow
This section describes how to create a new workflow that, depending on whether the task was approved or rejected, adds an appropriate aspect to all the files attached to the workflow. Let’s call the aspect ‘workflowOutcomeAspect’ and allow it to have two values: ‘approved’ or ‘rejected’. The definition of the new aspect is presented below.
Next, let’s modify the initial workflow (‘Review and Approve’) to add ‘workflowOutcomeAspect’ to all the child nodes of the package node and set the ‘workflowOutcome’ property of that aspect to ‘approved’ or ‘rejected’ depending on the user action. Note that the ‘Review and Approve’ workflow is one of the standard workflows available in an Alfresco deployment. The package is available in JavaScript under the ‘bpm_package’ variable and its children can be obtained by invoking ‘bpm_package.children’. More information about the creation and management of workflows can be found in my post Creation of workflow in Alfresco using Activiti step by step.
Creation of rule to copy the documents
On workflow approval or rejection the aspect property ‘workflowOutcome’ will be set to the appropriate value. In Alfresco Explorer or Share, let’s create a rule that checks whether documents in a particular folder have ‘workflowOutcome’ set and, depending on its value, copies the documents to the selected folder. Select the ‘copy’ action for the rule. The rule summary is presented below. In fact, I have created two rules – one to copy approved documents and one to copy rejected ones.
Rule summary
Rule Type: update
Name: Approved documents
Description:
Apply rule to sub spaces: No
Run rule in background: Yes
Disable rule: No
Conditions: Text Property 'wf:workflowOutcome' Equals To 'approved'
Actions: Move to 'approved'
Rule Type: update
Name: Rejected documents
Description:
Apply rule to sub spaces: No
Run rule in background: Yes
Disable rule: No
Conditions: Text Property 'wf:workflowOutcome' Equals To 'rejected'
Actions: Move to 'rejected'
I hope that you have enjoyed the post and found it useful.
This article describes a few useful bits and pieces about running Apache Tomcat.
Setup of Tomcat environment variables – setenv.sh
As stated in the CATALINA_BASE/bin/catalina.sh file, the following environment variables can be set in CATALINA_BASE/bin/setenv.sh. The setenv.sh script is sourced on Tomcat startup. It is not present in the standard Tomcat distribution, so it has to be created.
CATALINA_HOME May point at your Catalina “build” directory.
CATALINA_BASE (Optional) Base directory for resolving dynamic portions of a Catalina installation. If not present, resolves to the same directory that CATALINA_HOME points to.
CATALINA_OUT (Optional) Full path to a file where stdout and stderr will be redirected. Default is $CATALINA_BASE/logs/catalina.out
CATALINA_OPTS (Optional) Java runtime options used when the “start”, “run” or “debug” command is executed. Include here and not in JAVA_OPTS all options, that should only be used by Tomcat itself, not by the stop process, the version command etc. Examples are heap size, GC logging, JMX ports etc.
CATALINA_TMPDIR (Optional) Directory path location of temporary directory the JVM should use (java.io.tmpdir). Defaults to $CATALINA_BASE/temp.
JAVA_HOME Must point at your Java Development Kit installation. Required to run the with the “debug” argument.
JRE_HOME Must point at your Java Runtime installation. Defaults to JAVA_HOME if empty. If JRE_HOME and JAVA_HOME are both set, JRE_HOME is used.
JAVA_OPTS (Optional) Java runtime options used when any command is executed. Include here, and not in CATALINA_OPTS, all options that should be used by Tomcat and also by the stop process, the version command etc. Most options should go into CATALINA_OPTS.
JAVA_ENDORSED_DIRS (Optional) List of colon-separated directories containing jars that allow replacement of APIs created outside of the JCP (i.e. DOM and SAX from W3C). It can also be used to update the XML parser implementation. Defaults to $CATALINA_HOME/endorsed.
JPDA_TRANSPORT (Optional) JPDA transport used when the “jpda start” command is executed. The default is “dt_socket”.
JPDA_ADDRESS (Optional) Java runtime options used when the “jpda start” command is executed. The default is 8000.
JPDA_SUSPEND (Optional) Java runtime options used when the “jpda start” command is executed. Specifies whether JVM should suspend execution immediately after startup. Default is “n”.
JPDA_OPTS (Optional) Java runtime options used when the “jpda start” command is executed. If used, JPDA_TRANSPORT, JPDA_ADDRESS, and JPDA_SUSPEND are ignored. Thus, all required jpda options MUST be specified. The default is:
CATALINA_PID (Optional) Path of the file which should contain the pid of the Catalina startup Java process, when start (fork) is used.
LOGGING_CONFIG (Optional) Override Tomcat’s logging config file. Example (all one line): LOGGING_CONFIG="-Djava.util.logging.config.file=$CATALINA_BASE/conf/logging.properties"
LOGGING_MANAGER (Optional) Override Tomcat’s logging manager. Example (all one line): LOGGING_MANAGER="-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"
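To illustrate, a minimal setenv.sh using a few of the variables above might look like this (the values are examples of mine, not recommendations):

```shell
#!/bin/sh
# CATALINA_BASE/bin/setenv.sh -- sourced by catalina.sh on startup.
# Example values only; adjust for your environment.

# Options only for the server JVM (start/run), not for stop/version:
CATALINA_OPTS="-Xms512m -Xmx1024m -verbose:gc"

# Options for every JVM invocation, including the stop process
# and the version command:
JAVA_OPTS="-Dfile.encoding=UTF-8"
```

Because the file is sourced rather than executed, plain variable assignments like these are enough – no export is required for catalina.sh to see them.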
To run Tomcat you can use catalina.sh script with different options:
start: the Tomcat process is started in its own shell/session. Instead of this command you can run startup.sh.
run: the Tomcat process is started in the current shell/session, the startup process output is printed on the console, and execution stops on session close or on Ctrl+C.
To test the performance of multiple parallel file downloads, I had to make sure that a download takes a significant amount of time. I could use huge files, but that’s not very helpful if you work on a local 1Gb LAN. So I’ve decided to limit download speeds from my Apache server to my PC. Here we go.
1. Mark packets to be throttled, in my case those originating from port 80.
2. Use the tc utility to limit traffic for the packets marked above (handle 100).
3. That’s it – you can monitor/check your rules at any time.
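The three steps can be sketched as follows (the eth0 interface and the 1mbit rate are my assumptions; the mark value 100 matches the handle mentioned above – all commands require root):

```shell
# 1. Mark outgoing packets that originate from port 80:
iptables -t mangle -A OUTPUT -p tcp --sport 80 -j MARK --set-mark 100

# 2. Attach an HTB qdisc and throttle traffic carrying mark 100:
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
tc filter add dev eth0 parent 1: protocol ip handle 100 fw flowid 1:10

# 3. Monitor/check the rules:
tc -s qdisc show dev eth0
tc -s class show dev eth0
iptables -t mangle -L OUTPUT -v
```

To undo the shaping, `tc qdisc del dev eth0 root` removes the whole qdisc tree and `iptables -t mangle -F OUTPUT` drops the mark rule.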
Some time ago I had to process a lot of images in a simple way – remove the top and bottom part of them. It was not a task I could automate – the amount of image I had to cut from the top and bottom varied for each photo. To make the mundane work a bit easier, I’ve created a script – a Python plugin for GIMP.
The script assumes you have put two guide lines onto the image. It finds them, cuts the image from between them and saves as a new file.
To create such a simple script in Python you need to:
import gimpfu
run the register method that tells GIMP (among other things) the name of the function that implements the script (special_crop) and where to put a link to the script in the GIMP menu (<Image>/Filters)
implement your function
copy the script to your custom scripts folder (e.g. /home/…/.gimp-2.6/plug-ins)
The other locations you could use when choosing where in the menu system a script should appear are:
“<Toolbox>”, “<Image>”, “<Layers>”, “<Channels>”, “<Vectors>”, “<Colormap>”, “<Load>”, “<Save>”, “<Brushes>”, “<Gradients>”, “<Palettes>”, “<Patterns>” or “<Buffers>”
And finally, the script itself. It’s fairly self-explanatory – enjoy and happy gimping!
Sometimes it can be useful to monitor the performance of a Java Virtual Machine (JVM) on a remote host. A very nice tool – VisualVM – can be used to do so. It runs on the local host and gets its information from jstatd running on the remote host. In addition, VisualVM comes with a number of useful plugins. This post describes how to run VisualVM with VisualGC, the Visual Garbage Collection Monitoring Tool, to monitor Tomcat on a remote machine. However, the solution can also be applied to other applications running on the JVM.
Remote machine
Run Tomcat
Add the following options to the CATALINA_OPTS variable to enable JMX support in Apache Tomcat.
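A sketch of the options (the port 9090 and the hostname agile003 are my examples; with authentication and SSL disabled like this, use it on trusted networks only):

```shell
# Append remote-JMX settings to whatever CATALINA_OPTS already holds;
# typically this goes into CATALINA_BASE/bin/setenv.sh.
CATALINA_OPTS="$CATALINA_OPTS \
 -Dcom.sun.management.jmxremote \
 -Dcom.sun.management.jmxremote.port=9090 \
 -Dcom.sun.management.jmxremote.ssl=false \
 -Dcom.sun.management.jmxremote.authenticate=false \
 -Djava.rmi.server.hostname=agile003"
```

The java.rmi.server.hostname value should match the hostname discussed in the ‘Hostname’ section below, so that the RMI stubs handed to VisualVM point at an address the local machine can reach.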
We want to monitor a Tomcat instance running on a remote machine. To check whether it is running, use:
The above command run on my remote machine returns the following:
Notice that the pid is ‘28743’.
To get the Java VM process status you can run the jps command, the Java Virtual Machine Process Status Tool. jps is located in your Java JDK HOME/bin directory. A description of the jps command can be found here. Note that jps returns only the Java processes run by the user who runs jps. To get a list of all Java processes, run sudo jps. See the examples below.
Outcome of jps run on my remote machine:
Outcome of sudo jps run on my remote machine:
Notice that the lvmid for Tomcat (in this case – Bootstrap) is ‘28743’, which is the same as the pid.
Hostname
Run
to check the host name, e.g., agile003.
Make sure that in the /etc/hosts file this hostname maps to an IP by which it is visible to the machine that will be running VisualVM (the local machine), e.g., 192.168.1.20 agile003.
Run the jstat daemon
jstatd, the jstat daemon, can be found in Java JDK HOME/bin. As described in the jstatd documentation, create a file called jstatd.policy in any directory you choose, e.g., /home/joanna. The file should contain the following text:
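The snippet below creates the file with the policy text from the jstatd documentation; the post uses /home/joanna, here the directory is parameterised so the commands run anywhere:

```shell
# Create the security policy file jstatd requires.
POLICY_DIR="${POLICY_DIR:-$HOME}"
cat > "$POLICY_DIR/jstatd.policy" <<'EOF'
grant codebase "file:${java.home}/../lib/tools.jar" {
    permission java.security.AllPermission;
};
EOF
cat "$POLICY_DIR/jstatd.policy"
```

The quoted heredoc delimiter ('EOF') keeps ${java.home} literal – it is expanded later by the Java security framework, not by the shell.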
Run jstatd using the following command. Make sure you run it with root permissions.
You can add the following flag to log the calls and see what is going on:
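Putting the two together, the daemon start and the optional logging flag look like this (assuming the policy file created above sits in /home/joanna; run as root):

```shell
# Start jstatd with the security policy:
jstatd -J-Djava.security.policy=/home/joanna/jstatd.policy &

# The same, with RMI call logging enabled for troubleshooting:
jstatd -J-Djava.security.policy=/home/joanna/jstatd.policy \
       -J-Djava.rmi.server.logCalls=true &
```

The -J prefix passes the option through to the JVM that runs jstatd itself.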
Local machine
Check access to jstatd
Run jps with the remote hostname as the argument – in my case agile003. You should see the same output as for sudo jps run on the remote machine.
Run VisualVM
Run jvisualvm. It is located in your Java JDK HOME/bin directory; it can also be downloaded from here: JVM download.
Go to Tools > Plugins and install the plugins you require, including the VisualGC plugin. My selection is presented in the screenshot below.
Restart VisualVM.
Add remote host – one that jstatd is running on.
Give the IP of the host, e.g., 192.168.1.20. The same IP should be set for the host name in /etc/hosts on the remote server.
Now you should be able to access the Java processes running on the remote host, including Tomcat, as presented below.
All tabs, including VisualGC on the right-hand side, should now show appropriate graphs. See the sample screenshot below:
This guide describes how to serve a git repository over HTTP using Apache. This should work on any recent Ubuntu or Debian release; I’ve tested it on Ubuntu Server 11.10. I’m setting it up on my local server 192.168.1.20 under git/agilesparkle, so my repository will be available at http://192.168.1.20/git/agilesparkle. I want it to be password protected, but with only a single user with the following credentials: myusername/mypassword.
Server side
I assume you have Apache installed already. Switch to the root account so we won’t need to add sudo all the time, and install git:
Create directory for your git repository:
Create a bare git repository inside and set the rights so Apache has write access:
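A sketch of this step – the post uses /var/www/git/agilesparkle, but a scratch directory is used here so the snippet runs without root (the Apache ownership change is shown commented out):

```shell
# Create the directory that will hold the repository:
GIT_ROOT="${GIT_ROOT:-$(mktemp -d)}"
mkdir -p "$GIT_ROOT/agilesparkle"

# Create a bare repository (no working tree, suitable for serving):
git init --bare "$GIT_ROOT/agilesparkle"

# Give Apache write access (www-data on Debian/Ubuntu); requires root:
# chown -R www-data:www-data "$GIT_ROOT"

ls "$GIT_ROOT/agilesparkle"
```

A bare repository contains the usual HEAD, config, objects and refs entries at its top level instead of hiding them in a .git subdirectory.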
Enable the dav_fs module. This will automatically enable the dav module as well:
Configure Apache to serve the git repository using DAV:
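A sketch of the configuration – the file name and the AuthUserFile path are my assumptions; the password file itself can be created with `htpasswd -c /etc/apache2/passwd.git myusername`:

```apache
# /etc/apache2/conf.d/git.conf (example location)
<Location /git/agilesparkle>
    DAV on
    AuthType Basic
    AuthName "git repository"
    AuthUserFile /etc/apache2/passwd.git
    Require valid-user
</Location>
```

After reloading Apache, the repository can be cloned with `git clone http://myusername@192.168.1.20/git/agilesparkle`.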
Unless provided explicitly, the Java VM will set several performance-related options depending on the current environment. This mechanism is called ergonomics. You can see what defaults would be used on the machine by invoking:
The decision on the settings is made based on the number of processors and the total memory installed in the system. On my 32-bit EeePC with 2 processors (as visible to the OS) and 2GB memory, the output is:
And just for comparison, the output from Oracle Java 7:
On 64bit system with 8 CPUs and 16GB memory, the output is:
Oracle Java 7 again gives exactly the same ergonomics defaults.
I’ve been working for some time on rewriting the Global Search feature for Moodle. This is basically a search functionality that spans different regions of Moodle. Ideally it should allow searching everywhere within Moodle: forums, physical documents attached as resources, etc. The implementation should work in PHP, so as a search engine I’ve decided to use Zend’s implementation of Lucene. Unfortunately, the library doesn’t seem to be actively maintained – there were very few changes in the SVN log – practically no development of Search Lucene since November 2010 (the few entries in 2011 just fix typos or update the copyright date). The bug tracker is also full of Lucene issues and shows very little activity.
Having said that, I didn’t find any other search engine library implemented natively in PHP, so Zend_Search_Lucene it is! (please, please let me know if you know any alternatives)
Zend Lucene indexing performance-related settings
There are only 2 variables that can be changed to affect the performance of indexing:
$maxBufferedDocs
$mergeFactor
maxBufferedDocs
From the documentation:
Number of documents required before the buffered in-memory
documents are written into a new Segment
Default value is 10
This simply means that every $maxBufferedDocs calls to the addDocument() function, the index will be committed. Committing requires obtaining a write lock on the Lucene index.
So it should be straightforward: the larger the value, the less often the index is flushed – therefore overall performance (e.g. the number of documents indexed per second) is higher, but the memory footprint is bigger.
mergeFactor
The documentation says:
mergeFactor determines how often segment indices are merged by addDocument().
With smaller values, less RAM is used while indexing, and searches on unoptimized indices are faster,
but indexing speed is slower.
With larger values, more RAM is used during indexing, and while searches on unoptimized indices are slower,
indexing is faster.
Thus larger values (> 10) are best for batch index creation,
and smaller values (< 10) for indices that are interactively maintained.
So it seems pretty simple – for initial indexing we should set mergeFactor as high as possible and then lower it when more content is added to the index later on. With maxBufferedDocs we should simply find a balance between speed and memory consumption.
Testing indexing speed
I’ve tested various settings with my initial code for Global Search. As test data I’ve created a Moodle site with 1000 courses (really 999 courses, as I didn’t use course id=1 – the frontpage course in Moodle). Each course has 10 sections and there is 1 label inside each section; that is, 10 labels per course (note: the number of courses and sections is not really relevant for testing indexing speed).
Each label is a simple HTML text about 10k characters long, randomly generated from the words of “The Hitchhiker’s Guide to the Galaxy”. Here is a fragment of a sample label text (DB column intro):
<h2>whine the world, so far an inch wide, and</h2>
<h2>humanoid, but really knows all she said. - That</h2>
<span>stellar neighbour, Alpha Centauri for half an interstellar distances between different planet. Armed intruders in then turned it as it was take us in a run through finger the about is important. - shouted the style and decided of programmers with distaste at the ship a new breakthrough in mid-air and we drop your white mice, - of it's wise Zaphod Beeblebrox. Something pretty improbable no longer be a preparation for you. - Come off for century or so, - The two suns! It is. (Sass:</span>
[...9693 characters more...]
The intro and the name of each label are indexed. The total amount of data to index is about 100MB – exactly 104,899,975 bytes (SELECT SUM( CHAR_LENGTH( `name` ) ) + SUM( CHAR_LENGTH( `intro` ) ) FROM `mdl_label`) in 9990 labels. (Note for the picky ones: no, there are no multi-byte characters there.)
I’ve tested it on my local machine running: 64 bit Ubuntu 11.10, apache2-mpm-prefork (2.2.20-1ubuntu1.2), mysql-server-5.1 (5.1.61-0ubuntu0.11.10.1), php5 (5.3.6-13ubuntu3.6) with php5-xcache (1.3.2-1). Hardware: Intel Core i7-2600K @ 3.40GHz, 16GB RAM.
The results:
Time    | maxBufferedDocs | mergeFactor
1430.1  | 100 | 10
1464.7  | 300 | 400
1471.1  | 200 | 10
1540.9  | 200 | 100
1543.3  | 300 | 100
1549.7  | 200 | 200
1557.5  | 100 | 5
1559.3  | 300 | 200
1560.4  | 300 | 300
1577.0  | 200 | 300
1578.9  | 50  | 10
1581.5  | 200 | 5
1584.6  | 300 | 50
1586.6  | 300 | 10
1589.3  | 200 | 50
1591.2  | 200 | 400
1616.7  | 100 | 50
1742.2  | 50  | 5
1746.4  | 400 | 5
1770.7  | 400 | 10
1776.1  | 300 | 5
1802.3  | 400 | 50
1803.9  | 400 | 200
1815.7  | 50  | 50
1830.7  | 400 | 100
1839.4  | 400 | 400
1854.9  | 100 | 300
1870.1  | 400 | 300
1894.1  | 100 | 100
1897.2  | 100 | 200
1909.7  | 100 | 400
1924.4  | 10  | 10
1955.1  | 10  | 50
2133.4  | 5   | 10
2189.0  | 10  | 5
2257.6  | 10  | 100
2269.8  | 50  | 100
2282.7  | 5   | 50
2393.5  | 5   | 5
2466.8  | 5   | 100
2979.4  | 10  | 200
3146.8  | 5   | 200
3395.9  | 50  | 400
3427.9  | 50  | 200
3471.9  | 50  | 300
3747.0  | 10  | 300
3998.1  | 5   | 300
4449.8  | 10  | 400
5070.0  | 5   | 400
The results are not what I would expect – and definitely not what the documentation suggests: increasing both values should decrease the total indexing time. In fact, I was so surprised that the first thing I suspected was that my tests were invalid because something on the server was affecting the performance. So I’ve repeated a few tests:
First test | Second test | maxBufferedDocs | mergeFactor
1430.1 | 1444.9 | 100 | 10
1464.7 | 1490.6 | 300 | 400
1471.1 | 1491.1 | 200 | 10
1540.9 | 1593.5 | 200 | 100
1894.1 | 1867.7 | 100 | 100
1924.4 | 1931.2 | 10  | 10
1909.7 | 1920.4 | 100 | 400
5070.0 | 5133.3 | 5   | 400
The tests look OK! Here is a 3d graph of the results (lower values are better):
Explaining the results would require more analysis of the library implementation, but for end users like myself it makes the decision very simple: maxBufferedDocs should be set to 100 and mergeFactor left at 10 (the default value). As you can see on the graph, once you set maxBufferedDocs to 100, neither setting makes much of a difference (the surface is flat). Setting both higher will only increase the memory usage.
With those settings, on commodity hardware, the indexing speed was 71kB of text per second (7 big labels per second). The indexing process is clearly CPU-bound; further optimization would require optimizing the Zend_Search_Lucene code itself.
Testing performance degradation
The next thing to check is whether the indexing speed degrades over time. The speed of 71 kB/sec may be OK, but if it degrades much over time it may slow down to unacceptable values. To test it I’ve created ~100k labels with a total size of 1,049,020,746 bytes (1GB) and run the indexer again. The graph below shows the time it took to add each 1000 documents.
The time to add a single document is initially 0.05 sec and keeps growing, up to 0.15 sec at the end (100k documents). There is a spike every 100 documents, related to the value of maxBufferedDocs. But there are also bigger spikes every 1,000 documents, and even bigger ones every 10,000. I think this is caused by Zend_Lucene merging documents into a single segment, but I didn’t study the code deeply enough to be 100% sure.
It took in total 5.5h to index 1GB of data. The average throughput dropped from 73,356 bytes/sec (when indexing 100MB) to 53,903 bytes/sec (indexing 1GB of text).
The bottom line is that the speed of indexing keeps decreasing as the index grows, but not significantly.
The last thing to check is memory consumption. I checked the memory consumption after every document indexed, then for each group of 1000 documents I graphed the maximum memory used (the current memory usage keeps jumping around).
The maximum peak memory usage does increase but very slowly (1MB after indexing 100k documents).
This post describes how to configure workflow in Alfresco framework using Activiti engine. Instructions on how to set up the development environment can be found here. More information about workflows in Alfresco can be found on Alfresco wiki page. I also found the following very useful:
Before you start, make sure that you have the Activiti BPMN 2.0 designer, which is an Eclipse plugin. It makes editing workflow models easier.
Workflow description
We are going to create the following workflow:
In a software development company there are 3 teams: sales, management, and developers. The sales team has to have information about work estimates, e.g., the number of development days. When development work is required, a sales person contacts management and requests an estimate for the work. A more accurate description of the work (say, a scope document) is stored in a Project Management system external to Alfresco. One of the managers sends the request to a developer to do the estimate for the work. The estimate comes back to the manager and, after approval, is returned to the sales person. If the estimate is not approved, it goes back to the developer for changes. This workflow is presented in the following picture, created in the Activiti BPMN 2.0 designer.
Workflow is started by a sales person.
‘Assign Estimate Task’ is done by a manager.
‘Estimate Task’ is done by a developer.
‘Review Estimate’ is done by the manager requesting estimate.
‘Estimate Approved’ is done by the sales person requesting estimate.
The following XML file is associated with the workflow above:
Let’s name this XML file estimate.bpmn20.xml.
Integration of Activiti workflows in Alfresco
There are several points of integration between the Activiti framework and Alfresco. They are described below.
Script execution listeners
When a workflow is started, script execution starts. There is a common execution object, accessible throughout the whole execution (from start to end), used to share information between script tasks. The execution object has org.alfresco.repo.workflow.activiti.listener.ScriptExecutionListener defined. There are 3 events defined for the listener: start, end, and take. The events can be used to execute some code within the workflow. In the workflow model (estimate.bpmn20.xml) it is possible to define script execution listeners in extensionElements tags and use the JavaScript API and JavaScript Services API within the code to be run.
When the workflow starts we want to obtain the full name of the ‘Management’ group and save it in the script variable wf_managementGroup so it is accessible in all the tasks within the workflow. The following piece of code shows how to do it. groups.getGroup is defined in the JavaScript Services API and is used to get the names of all the groups.
Script task listeners
For each task in the workflow where user action is required (userTask tag), in addition to the script execution listener it is possible to use a script task listener. When a task starts, a new task object is created for it and this object is accessible during the task execution. There are 3 events defined for org.alfresco.repo.workflow.activiti.tasklistener.ScriptTaskListener: create, assignment, and complete.
Let’s say that on task start we want to assign some script execution variables to task variables. The following piece of code shows how to do it. In the example below the script execution variable bpm_workflowDueDate, which corresponds to the due date of the workflow, is assigned to the dueDate property of the task. Note that the due date of the workflow can be different from the due date of a task.
Connection between tasks in the flow and the appropriate forms in Alfresco Share
To display each task in Alfresco Share, workflow forms are used. They are configured in the share-workflow-form-config.xml file. In the workflow model (estimate.bpmn20.xml) it is possible to put the attribute activiti:formKey in the userTask tag, which points to the appropriate form defined in Alfresco Share. To connect user tasks with forms it is necessary to define a new type and use its name in the workflow model and the corresponding form configuration.
Let’s say that we defined the wf:assignEstimateTask type. The type overrides the bpm:packageActionGroup property of the bpm:workflowTask type and has 2 mandatory aspects: bpm:assignee and wf:workInfo. The corresponding code is presented below.
This type is linked with workflow model using activiti:formKey attribute.
In the form configuration (share-workflow-form-config.xml) there is a filter which displays a form depending on the task type. The forms identify which properties of the type should be displayed and where/how to display them. A sample form configuration is presented below.
This form is displayed as presented below:
Please note that wf:workDescription is a property of the wf:workInfo aspect, which is defined as follows.
Assignment to the task
There are 2 attributes that describe the person/group responsible for a task. Both of them are set when the task is being created and indicate for which users it should be displayed. activiti:assignee identifies the single user that is assigned to the task. activiti:candidateGroups identifies groups of users that can claim the task.
Sample definitions of user/group assignment are presented below.
Decision in the workflow
In the workflow presented it is possible to decide in the ‘Review Estimate’ task whether the estimate should be accepted or rejected. It is important to know that when the sequenceFlow tags from the workflow model (estimate.bpmn20.xml) are evaluated, the first matching one is executed. There are 2 flows (flow6 and flow8) from the ‘Review Estimate’ task that point either to the ‘Estimate Task’ or to the ‘Estimate Approved’ task. To follow the appropriate route there should be a condition set on the first of them. That way, when the condition is true the first sequence fires; if not, the second one is executed. The condition can depend on some script execution variable, e.g., wf_reviewOutcome, set in the task listener on the complete event. The code below presents the part of the workflow model which sets and uses the variable responsible for the choice of workflow path.
After all this introduction, let’s go through all the steps necessary to define the workflow.
Step 1 Define workflow model and customize it
Define the workflow model in the Activiti BPMN 2.0 designer and save it as alfresco/WebContent/WEB-INF/classes/alfresco/workflow/estimate.bpmn20.xml, or as WEB-INF/classes/alfresco/workflow/estimate.bpmn20.xml in your Alfresco deployment directory on the Tomcat server. Update the model to set the necessary variables and point to the appropriate types. A full listing of the estimate.bpmn20.xml file used in this example is presented below.
Step 2 Define types
The next step is to define the types used in the model in the activiti:formKey attribute. The types can:
inherit properties from other types (parent tag)
override inherited properties (overrides tag)
define new properties (properties tag)
add aspects to the types (mandatory-aspects tag)
Note that the task properties have to be set at the beginning of each task, because a new task object is created each time a new task starts. Therefore the following line is present in the example above: if (typeof bpm_workflowDueDate != 'undefined') task.dueDate = bpm_workflowDueDate;. Aspect objects, once created, are not destroyed until the end of the script execution. This is a good way to keep variables that, once set, apply to all the tasks – like, in our case, the ‘link to the document’. The properties from aspects can be used in the form configuration. The definition of all the types used in the example is presented below. They were saved in the following location: /alfresco/WebContent/WEB-INF/classes/alfresco/workflow/workflowModel-custom.xml
Step 3 Define workflow forms
For each task in the workflow it is possible to configure the page to be shown. It is done using the config tag and the evaluator and condition attributes. The part of /share/WebContent/WEB-INF/classes/alfresco/share-workflow-form-config.xml used in Alfresco Share that is responsible for rendering the appropriate workflow forms is presented below. Please note that the first step of the flow does not have a type defined, therefore instead of the task-type evaluator the string-compare evaluator should be used: evaluator="string-compare" condition="activiti$estimate", where ‘estimate’ is the process id (id="estimate" name="estimate") defined in the estimate.bpmn20.xml file. For all the other steps evaluator="task-type" is used and condition corresponds to the type defined for the particular step.
Step 4 Translate messages
All the translations are defined in /alfresco/WebContent/WEB-INF/classes/alfresco/workflow/workflow-messages_xx.properties
Step 5 Add new configuration files to Alfresco bootstrap sequence
Bootstrap configuration is defined in the /alfresco/WebContent/WEB-INF/classes/alfresco/bootstrap-context.xml file. Please add the path to the workflow model file (alfresco/workflow/estimate.bpmn20.xml), the path to the model file with the new types (alfresco/workflow/workflowModel-custom.xml), and the labels with translations if used instead of the workflow-messages file.
Step 6 Start Alfresco and Alfresco Share
Log in to Alfresco as administrator; otherwise you may not be able to connect to the workflow console.
Step 7 Deploy type definition in Alfresco workflow console
Go to: http://localhost:8080/alfresco/faces/jsp/admin/workflow-console.jsp
To deploy the estimate workflow definition, type:
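Based on the workflow console syntax in Alfresco 4.x (verify against your version – treat the exact command form as an assumption), deploying and checking the definition looks like:

```
deploy activiti alfresco/workflow/estimate.bpmn20.xml
show definitions
```

The first command registers the BPMN file from the classpath with the Activiti engine; the second lists the currently deployed definitions so you can confirm the deployment succeeded.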
If another version of the estimate workflow definition was deployed before, it might be necessary to redeploy it. To do so, remove all the workflows that use it and then remove the estimate workflow definition itself. This ensures that no versions of the estimate workflow definition are present in the system. Then deploy the estimate workflow definition again.
This console can be quite handy for workflow debugging.
Zim is a desktop wiki I highly recommend. Recently I’ve switched to another desktop and, after copying my whole home directory, zim’s default notebooks did not open anymore. This was because I had changed my username, which caused a change in the location of my zim notebooks. This can be easily fixed in the zim configuration file, which you will find at this location:
~/.config/zim/notebooks.list
Simply change the paths in this config file; the format is very straightforward (hint: the vim command to substitute oldusername with newusername throughout the file is :%s/oldusername/newusername/g).