Moodle’s performance information and events

Moodle’s performance information (enabled under Site administration -> Development -> Debugging) is very useful for troubleshooting your site’s performance. Sometimes a single look there will tell you where the (performance) problem is. But sometimes not.

The limitation of the current implementation is that the time and resources used by event-triggered logging actions are not taken into account.

Consider the following Moodle page:

<?php
require_once('config.php');

$PAGE->set_context(context_system::instance());
$PAGE->set_url('/log.php');

$event = \core\event\user_login_failed::create(
           ['other' => ['username' => 'Test', 'reason' => 'No reason!']]
         );
$event->trigger(); // The enabled log stores will record this event.

echo $OUTPUT->header();
echo $OUTPUT->footer();

See the sample footer with the performance information:

[Screenshot: Moodle performance information in the page footer]

It tells us that no DB writes were performed. This cannot be true, because a new row was added to the mdl_logstore_standard_log table.

Let’s check whether the time spent by the logging function is counted. I have edited the function insert_event_entries in admin/tool/log/store/standard/classes/log/store.php and injected sleep(10); there:

    protected function insert_event_entries($evententries) {
        global $DB;

        sleep(10); // Artificial delay injected for this test.
        $DB->insert_records('logstore_standard_log', $evententries);
    }

After a refresh, the information in the footer does not change! Moodle tells me it took 0.133499 secs to generate the page, while it obviously took more than 10 seconds. The browser debugger confirms it:

[Screenshot: browser developer tools showing the actual request timing]

I have logged that in Moodle tracker MDL-68817.


How do session locks work in Moodle (part 3)?

The problem with session locks is that they can often cause performance issues for end users. Imagine that a teacher runs some complicated report, and the report generation takes more than a minute. After just a few seconds the teacher gets bored and, instead of waiting for the report, opens the course page in a new tab. Loading the course page takes forever - this is because the web server has locked the session for the report generation. The end-user experience is poor - they will complain that loading the course page took a long time (and it did, even though the root cause was the poorly performing custom report).

Calling \core\session\manager::write_close() will help a bit - we release the lock as soon as we can.
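A minimal sketch of that pattern (generate_complicated_report() is a hypothetical placeholder for the slow work):

require_once('config.php');

// Do everything that reads or writes the session first, while we hold the lock.
// ...

// From here on we promise not to write to the session - release the lock now.
\core\session\manager::write_close();

// The slow part runs without blocking the user's other requests.
generate_complicated_report(); // Hypothetical long-running function.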

But we can go one step further here - mark some scripts (pages) as not needing the session lock at all. That is a new feature in Moodle 3.9 - READ_ONLY_SESSION, implemented in MDL-58018. We promise that the script will not write to the session.

We do it by declaring READ_ONLY_SESSION before including config.php:

define('READ_ONLY_SESSION', true);
require_once("config.php");

echo "Time start: " . date("H:i:s") . "<br />";
sleep(5);
echo "Time end: " . date("H:i:s") . "<br />";

The prerequisites for read-only sessions are (see the config.php sketch below):

  • Enable them in config.php by setting:
$CFG->enable_read_only_sessions = true;
  • Change the session handling to the database. At the moment read-only sessions are implemented for the DB, Redis and memcached handlers. The work on adding support for file-based sessions continues in MDL-68522.
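Putting the two together, the config.php fragment could look like this (a sketch; $CFG->dbsessions is the standard setting that switches Moodle to database sessions):

// Store sessions in the database instead of moodledata/sessions.
$CFG->dbsessions = true;
// Allow scripts that declare READ_ONLY_SESSION to skip the session lock.
$CFG->enable_read_only_sessions = true;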

After adding those lines to page1.php and page2.php (the test scripts from part 2) and running them one after another, I get these results:

Time start: 17:20:53
Time end: 17:20:58
Time start: 17:20:54
Time end: 17:20:59

No lock at all! The second script started running as soon as my request was received.

The new functionality is currently used in 2 places:

  • lib/ajax/getnavbranch.php - no need to wait for the session lock just to get the navigation menu entries.
  • lib/ajax/service.php - the script starts with no session lock and will continue in this mode if the service being called has readonlysession set to true (see the sketch below).
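For illustration, a web service function can opt in through its declaration in db/services.php. The following is a sketch only - local_myplugin_get_stats and its class are made-up names:

$functions = [
    'local_myplugin_get_stats' => [
        'classname'       => 'local_myplugin\external\get_stats',
        'methodname'      => 'execute',
        'type'            => 'read',
        'ajax'            => true,
        // Promise that this service never writes to the session.
        'readonlysession' => true,
    ],
];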

So - get the latest Moodle 3.9, enable $CFG->enable_read_only_sessions, use DB or Redis for your sessions, and enjoy the benefits of lock-free session handling.


How do session locks work in Moodle (part 2)?

In part 1 we learned how session locks work in PHP. Now let’s apply that in the Moodle context.

We start with the simplest possible Moodle page: we bootstrap Moodle, sleep for 5 seconds and finish.

I have created page1.php and page2.php, both with the same content:

require_once("config.php");

echo "Time start: " . date("H:i:s") . "<br />";
sleep(5);
echo "Time end: " . date("H:i:s") . "<br />";

And then I opened both in separate tabs, one after the other. The results:

Time start: 17:21:50
Time end: 17:21:55
Time start: 17:21:55
Time end: 17:22:00

We can clearly see that the second script started running only after the first one had finished. This is because Moodle initiated the session while bootstrapping - it happened when we included config.php.

Now let’s imitate the session_write_close() mechanism - in Moodle we can use \core\session\manager::write_close():

require_once("config.php");

echo "Time start: " . date("H:i:s") . "<br />";
sleep(2);
core\session\manager::write_close();
sleep(3);
echo "Time end: " . date("H:i:s") . "<br />";

Now when I run both scripts:

Time start: 18:53:47
Time end: 18:53:52
Time start: 18:53:49
Time end: 18:53:54

I launched both of them in the same second. This time the code in the second script started just 2 seconds after the first one - as soon as the first script released the lock.

Closing the session early is used in several places in Moodle. For example, the last lines of file.php - a script that is used to send a file to the user - are:

// ========================================
// finally send the file
// ========================================
\core\session\manager::write_close(); // Unlock session during file serving.
send_stored_file($file, null, $CFG->filteruploadedfiles, $forcedownload);

This makes sense - imagine what would happen if sending a big file for download kept the session locked! A teacher would not be able to access any other page while downloading that big PDF assignment file.

The default and simplest implementation of sessions and session locks in a LAMP stack is the default PHP implementation - which means file-based sessions. Moodle overrides the location of the PHP session files; you can find them in the moodledata/sessions directory. This is implemented in the \core\session\file class.

If I run page1.php and page2.php at the same time, and I know the name of the session file, I can interrogate it on the server side:

$ sudo fuser sess_k8pm2bug3rkk93ldhvu47aih62
/opt/data/vanilla39/sessions/sess_k8pm2bug3rkk93ldhvu47aih62:  9944  9953

$ sudo lsof sess_k8pm2bug3rkk93ldhvu47aih62
COMMAND  PID     USER   FD   TYPE DEVICE SIZE/OFF     NODE NAME
apache2 9944 www-data   13u   REG  259,6     4900 16132983 sess_k8pm2bug3rkk93ldhvu47aih62
apache2 9953 www-data   13uW  REG  259,6     4900 16132983 sess_k8pm2bug3rkk93ldhvu47aih62

fuser shows the IDs of the processes using the file. From lsof we can see that process 9953 currently holds the session lock, as it has the write lock flag (W) set on its file descriptor. So the apache2 process 9944 must be waiting for the lock.

As administrator, you can go to “Site administration -> Server -> Session handling” and set “Use database for session information” (dbsessions). This changes the session implementation to the \core\session\database class.

Now the session locks are handled by the database. Each database driver implements its own locking method: for example, MySQL uses its named lock functions (GET_LOCK() and RELEASE_LOCK()) and PostgreSQL uses advisory locks (pg_advisory_lock()).
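To get a feel for how that works, here is a rough plain-PHP illustration (not actual Moodle code) of a named MySQL lock; the connection details and lock name are made up:

// GET_LOCK() blocks until the named lock is free or the timeout (seconds) expires.
$db = new mysqli('localhost', 'user', 'password', 'moodle');
$db->query("SELECT GET_LOCK('mdl_session_k8pm2bug', 120)");

// ... handle the request while "holding" the session ...

// RELEASE_LOCK() frees the lock so the next waiting request can proceed.
$db->query("SELECT RELEASE_LOCK('mdl_session_k8pm2bug')");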

There are other session implementations too, like memcached or Redis. To switch to those, you need to configure $CFG->session_handler_class in config.php.
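For example, switching to Redis could look like this in config.php - a sketch using the standard session_redis_* settings, with the host and port assumed to be a local default Redis instance:

$CFG->session_handler_class = '\core\session\redis';
$CFG->session_redis_host = '127.0.0.1'; // Assumed: Redis running locally.
$CFG->session_redis_port = 6379;        // Assumed: default Redis port.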

Move on to the last part - part 3.


How do session locks work in Moodle (part 1)?

The “problem” of session locking is not specific to Moodle. In general, web applications work in such a way that any authenticated user has their session data stored somewhere on the server. At the beginning of an HTTP request the session data is retrieved, and at the end of the request it is stored back.

What happens when the same user sends a second request before the first one finishes?

If PHP allowed access to the same session for more than one request at a time, the data could easily get corrupted by two separate processes reading and writing it in parallel. So instead, the session is locked. The second request will get the session data only after the first request finishes (and its session data is safely written back).

To illustrate it, let’s say we have 2 PHP scripts: sleep1.php and sleep2.php. They are exactly the same and all they do is sleep for 5 seconds:

echo "Time start: " . date("H:i:s") . "<br />";
sleep(5);
echo "Time end: " . date("H:i:s") . "<br />";

When I run them in the browser, one after another, in separate browser tabs I get the results:

Time start: 20:11:17
Time end: 20:11:22
Time start: 20:11:18
Time end: 20:11:23

I opened the second tab 1 second after the first one; both took 5 seconds to finish. Simple.

Now I replace the code of both with a version that calls session_start():

echo "Time start: " . date("H:i:s") . "<br />";
session_start();
echo "After session_start(): " . date("H:i:s") . "<br />";
sleep(5);
echo "Time end: " . date("H:i:s") . "<br />";

And just like before, I open sleep1.php in the first tab and sleep2.php in the second:

Time start: 20:25:33
After session_start(): 20:25:33
Time end: 20:25:38
Time start: 20:25:34
After session_start(): 20:25:38
Time end: 20:25:43

I clicked to open sleep1.php at 20:25:33. It acquired the session immediately, at 20:25:33, and kept running until 20:25:38. Meanwhile, the second script was started at 20:25:34, but the call to session_start() blocked it until 20:25:38 - exactly the time the first script finished. Then, after getting the session, sleep2.php ran for 5 seconds, until 20:25:43.

To alleviate the problem caused by the session lock, we can voluntarily release the lock before the script finishes. Imagine that during those 5 seconds (we only sleep now, but let’s pretend some real work is done), everything we need to write back to the session is written within the first 2 seconds.

Then, during the remaining 3 seconds, we do some other processing that we know will not need to update the user’s session. To release the lock earlier, we can use session_write_close().

Let’s extend our scripts:

echo "Time start: " . date("H:i:s") . "<br />";
session_start();
echo "After session_start(): " . date("H:i:s") . "<br />";
sleep(2);
session_write_close();
echo "After session_write_close(): " . date("H:i:s") . "<br />";
sleep(3);
echo "Time end: " . date("H:i:s") . "<br />";

The result now:

Time start: 16:28:24
After session_start(): 16:28:24
After session_write_close(): 16:28:26
Time end: 16:28:29
Time start: 16:28:25
After session_start(): 16:28:26
After session_write_close(): 16:28:28
Time end: 16:28:31

That’s better! Previously the time from start to end for sleep2.php was 9 seconds: 4 seconds waiting for the lock plus 5 seconds of its own processing. Now the wait for the lock is down to 1 second, and the sleep2.php run took 6 seconds. The second script gets the lock as soon as session_write_close() is called in the first one.

Session locking is sometimes problematic in applications like Moodle, because nearly all HTTP requests in Moodle come from authenticated users. Those users have a session created for them, and therefore the locking described above affects them.

Move on to part 2.


How to change the time after which unused LXD images are deleted?

When downloaded for the first time, LXD images are cached in the /var/cache/lxc/download directory. If they are unused for images.remote_cache_expiry days, they are deleted. The default value is 10 (days).

To change it to something higher:

sudo lxc config set images.remote_cache_expiry 20

How to delete several remote git branches?

Bash to the rescue - a quick and dirty way. First, check what will be deleted by just echoing the “git push…” commands. In the example below we look for all branches with OLD_BRANCH in the name.

git branch -r | grep OLD_BRANCH | cut -d'/' -f 2- | xargs -L 1 echo git push origin --delete

If everything looks OK, then actually run the commands:

git branch -r | grep OLD_BRANCH | cut -d'/' -f 2- | xargs -L 1 git push origin --delete

How to compile The Battle for Wesnoth from source?

Install dependencies:

sudo apt build-dep wesnoth
sudo apt install libsdl2-dev libsdl2-image-dev libsdl2-mixer-dev libsdl2-ttf-dev libboost-all-dev libvorbis-dev libcairo2-dev libpango1.0-dev libssl-dev libreadline-dev cmake make scons pkgconf

Download the source code from https://wiki.wesnoth.org/Download, then extract and compile it:

tar -xf wesnoth-1.14.9.tar.bz2
cd wesnoth-1.14.9
scons

To play the game you don’t need to install it - simply run it from the same directory:

./wesnoth

Enjoy!


How many lines of code does Moodle core have?

The latest release - 3.4 - has about 800k non-comment, non-blank lines of PHP code and about 90.7k non-comment, non-blank lines of JS code.

How did that change from version to version?

See the metrics from Moodle 2.6 to Moodle 3.4 for the PHP code:

[Chart: non-comment, non-blank lines of PHP code, Moodle 2.6 to 3.4]

and for the JavaScript code:

[Chart: non-comment, non-blank lines of JavaScript code, Moodle 2.6 to 3.4]

How was that calculated?

To analyze core Moodle, I removed third-party libraries and the “build” and “lang” directories from it. I then used cloc to calculate the number of non-comment, non-empty lines of code.


How to route chosen requests to a specific web server with HAProxy?

The problem

We have a typical web application deployed behind a load balancer with several web servers behind it. The web servers are nearly identical - running the same OS, code and packages. But they differ in one way - some have fast access to the external assets repository. The other web servers have access as well, but it is much slower. It looks something like this:

[Diagram: web1, web2 and web3 behind a load balancer; web1 has fast access to the assets server]

Moodle context

The web application is Moodle. It is configured to use a file system repository to access the assets server. One of the web servers (web1) is in a co-located network with fast access to the storage. Access from web2 and web3 is still possible (and configured), but much slower. It would be beneficial if all the requests that trigger an access to the external file server were routed through the web1 server. Practically, it means that we want to send all requests to /repository/* scripts to web1.

The solution

HAProxy can do exactly what we need:

  • If a request is to http:///repository/* then route it to web1.
  • Let’s not compromise high availability - in case our preferred web1 is down, send the requests above to web2 or web3.
  • Send all the other requests to web1 (there is no point in making web1 dedicated to repository requests only), web2 or web3.

We start the HAProxy configuration with the frontend section. The custom acl rule called “url_file” below will match requests whose path starts with /repository/. If the acl condition is met, we send the request to the backend called “file_servers”. Otherwise the request goes to the default backend, called “all_servers”.

frontend http-in
	bind *:80

	acl url_file path_beg /repository/

	default_backend all_servers
	use_backend file_servers if url_file

Next comes the definition of our backend all_servers. Nothing really unusual here - we just allow a few less (16 instead of 32) max connections to web1, as this one will be a bit more busy serving the additional requests.

backend all_servers
	server web1 10.0.3.225:80 maxconn 16 check
	server web2 10.0.3.148:80 maxconn 32 check
	server web3 10.0.3.50:80 maxconn 32 check

In the “file_servers” backend we want to use the web1 server only, unless it is down. Only then will the other web servers take over - this is done with the “backup” option of the HAProxy configuration:

backend file_servers
	server web1 10.0.3.225:80 maxconn 32 check
	server web2 10.0.3.148:80 maxconn 32 check backup
	server web3 10.0.3.50:80 maxconn 32 check backup

The configuration works as expected. The requests to http:///repository/repository_ajax.php are handled by web1, but when web1 is shut down, web2 and web3 take over.


How does PrivateTmp affect Apache on Ubuntu?

The Apache service on Ubuntu 17.04 is configured by systemd with the PrivateTmp=true option. You can see it in:

$ cat /etc/systemd/system/multi-user.target.wants/apache2.service
[Unit]
Description=The Apache HTTP Server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
Environment=APACHE_STARTED_BY_SYSTEMD=true
ExecStart=/usr/sbin/apachectl start
ExecStop=/usr/sbin/apachectl stop
ExecReload=/usr/sbin/apachectl graceful
PrivateTmp=true
Restart=on-abort

[Install]
WantedBy=multi-user.target

This has some implications - as per the systemd documentation, it “makes sharing between processes via /tmp or /var/tmp impossible”.

Basically, /tmp and /var/tmp are re-mounted for the apache process when it starts. During a service restart the directories are cleaned up and re-created.

In the main filesystem they will be visible under something like: /tmp/systemd-private-3aa821d7448d4b21a2c379847aa20e20-apache2.service-0XM9bk

Let’s see it in action - I will find one of my running apache processes:

$ ps auxf | grep apache
root     11593  0.0  0.1 429640 36440 ?        Ss   15:15   0:00 /usr/sbin/apache2 -k start
www-data 11598  0.0  0.2 439860 83336 ?        S    15:15   0:05  \_ /usr/sbin/apache2 -k start

and query /proc for the mount information:

$ cat /proc/11598/mountinfo | grep /tmp
316 214 8:2 /tmp/systemd-private-3aa821d7448d4b21a2c379847aa20e20-apache2.service-0XM9bk/tmp /tmp rw,relatime shared:291 master:1 - ext4 /dev/sda2 rw,errors=remount-ro,data=ordered
317 214 8:2 /var/tmp/systemd-private-3aa821d7448d4b21a2c379847aa20e20-apache2.service-oKEULP/tmp /var/tmp rw,relatime shared:292 master:1 - ext4 /dev/sda2 rw,errors=remount-ro,data=ordered

$ cat /proc/11598/mounts | grep /tmp
/dev/sda2 /tmp ext4 rw,relatime,errors=remount-ro,data=ordered 0 0
/dev/sda2 /var/tmp ext4 rw,relatime,errors=remount-ro,data=ordered 0 0

And sure enough - a file “cli.html” created by the apache process is visible to me as:

$ ls /tmp/systemd-private-3aa821d7448d4b21a2c379847aa20e20-apache2.service-0XM9bk/tmp
cli.html