Moodle’s performance information (which can be enabled under Site administration -> Development -> Debugging) is very useful for troubleshooting site performance.
Sometimes just one look there will tell you where the (performance) problem is. But sometimes not.
The limitation of the current implementation is that the time and resources used by event-triggered logging actions are not taken into account.
Consider the following Moodle page:
See the sample footer with the performance information:
It tells us that no DB writes were performed. This cannot be true, because a new row was added to the mdl_logstore_standard_log table.
Let’s check if the time spent by the logging function is covered.
I have edited the function insert_event_entries in admin/tool/log/store/standard/classes/log/store.php and injected sleep(10); there:
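The injected delay looked roughly like this (a sketch - the original body of the method is elided, only the artificial delay is shown):

```php
protected function insert_event_entries($evententries) {
    sleep(10); // Artificial delay injected for this experiment.
    // ... original insert into logstore_standard_log follows ...
}
```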
After a refresh, the information in the footer does not change!
Moodle tells me it took 0.133499 secs to generate it, while it obviously took more than 10 seconds.
Browser debugger confirms it:
How do session locks work in Moodle (part 3)?
The problem with session locks is that they often cause performance issues for end users.
Imagine that a teacher runs some complicated report. The report generation takes more than one minute.
After just a few seconds, the teacher gets bored and opens the course page in a new tab instead of waiting for the report.
Loading the course page takes forever - this is because the web server has locked the session for the report generation.
The end-user experience is poor - they will complain that loading the course page took a long time (and it did - even though the root cause was the poorly performing custom report).
Triggering \core\session\manager::write_close() will help a bit - we release the lock as soon as we can.
But we can go one step further here - mark some scripts (pages) as not needing the session lock at all.
That is a new feature in Moodle 3.9 - READ_ONLY_SESSION implemented in MDL-58018.
We promise here that the script will not write to the session.
We do it by declaring READ_ONLY_SESSION before including config.php:
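A minimal sketch of such a page - the constant must be defined before config.php is included, so that the Moodle bootstrap can honour it:

```php
<?php
define('READ_ONLY_SESSION', true); // Promise: this script will not write to the session.
require(__DIR__ . '/config.php');  // Moodle bootstrap - no session lock is taken.

// ... read-only work here ...
```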
The pre-requisites for the read only sessions are:
Enable in config.php by setting:
Change session handling to the database. At the moment, read-only sessions are implemented for DB, Redis and memcached.
The work on adding support for file-based sessions continues in MDL-68522.
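Putting the prerequisites together, config.php would contain something like this (a sketch - pick the session handler class matching your setup):

```php
// Enable the read-only sessions feature.
$CFG->enable_read_only_sessions = true;

// Use database-backed sessions (Redis and memcached handlers also support read-only sessions).
$CFG->session_handler_class = '\core\session\database';
```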
After running page1.php and page2.php one by one, I get the results:
No lock at all! The second script started running as soon as my request was received.
The new functionality is currently used in 2 places:
lib/ajax/getnavbranch.php - no need to wait for the session lock just to get the navigation menu entries.
lib/ajax/service.php - the script starts with no session lock and will continue in this mode if the service being called has readonlysession set to true.
So - get the latest Moodle 3.9, enable $CFG->enable_read_only_sessions, use DB or Redis for your sessions and enjoy the benefits of no locking session calls.
How do session locks work in Moodle (part 2)?
In part 1 we learned how session locks work in PHP.
Now let’s apply it in the Moodle context.
We start with the simplest possible Moodle page. We only bootstrap Moodle, note the start time, sleep for 5 seconds and finish.
I have created page1.php and page2.php both with the same content:
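The content of both files could look like this (a sketch - timestamps are printed so the overlap between the two requests is visible):

```php
<?php
require(__DIR__ . '/config.php'); // Moodle bootstrap - this starts (and locks) the session.

echo 'Started: ' . date('H:i:s') . "<br>";
sleep(5); // Pretend to do 5 seconds of work.
echo 'Finished: ' . date('H:i:s');
```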
And then I have opened both in separate tabs. One after another. The results:
We can clearly see that the second script started running only after the first one had finished.
This is because Moodle initiated the session while bootstrapping. It happened when we included config.php.
Now let’s imitate the session_close() mechanism - in Moodle we can use manager::write_close():
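A sketch of the modified scripts - the session lock is released right after bootstrap, before the sleep:

```php
<?php
require(__DIR__ . '/config.php');

\core\session\manager::write_close(); // Write session data and release the lock.

echo 'Started: ' . date('H:i:s') . "<br>";
sleep(5);
echo 'Finished: ' . date('H:i:s');
```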
Now, when I run both scripts:
I have launched both of them in the same second. This time my code in the second script started just 2 seconds after the first one.
Closing the session is used in Moodle in several places. For example the last lines of the code in file.php - a script that is used to send a file to user, are:
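The gist of those final lines is this (a simplified illustration of the pattern, not the verbatim source - the parameters passed to send_stored_file() are elided here):

```php
// Release the session lock first...
\core\session\manager::write_close();
// ...then stream the file to the user; this can take a long time for big files.
send_stored_file($file /* , ... */);
```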
This makes sense - imagine what would happen if sending a big file for the download would keep the session locked!
A teacher would not be able to access any other page while downloading that big PDF assignment file.
The default and simplest implementation of session and session locks in your LAMP stack is the default PHP implementation - which means file based sessions.
Moodle overwrites the location of the PHP session files, and you can find them in moodle_data/sessions directory.
This is implemented in \core\session\file class.
If I run page1.php and page2.php at the same time, and I know the name of the session file, I can interrogate it on the server side:
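Assuming the session file is called something like sess_abc123 (the name here is hypothetical - yours will differ), the interrogation could look like this; the output will of course vary per system:

```
sudo fuser /var/moodledata/sessions/sess_abc123   # PIDs of processes using the file
sudo lsof  /var/moodledata/sessions/sess_abc123   # look for the 'W' (write lock) flag in the FD column
```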
fuser is showing me the IDs of the processes using the file.
From lsof we can see that process 9953 has currently the session lock - as it has the writeable flag (W) set.
So apache2 process 9944 must be waiting for the lock.
As administrator, you can go to “Site administration -> Server -> Session handling” and set “Use database for session information” (dbsessions).
This will change the implementation of sessions to class:
Now the session locks are handled by the database. Each driver implements its own method - for example, MySQL uses its GET_LOCK()/RELEASE_LOCK() functions and PostgreSQL uses pg_advisory_lock().
There are other implementations of sessions, like memcached or redis. To switch to those, you need to configure $CFG->session_handler_class in config.php.
How do session locks work in Moodle (part 1)?
The “problem” of session locking is not specific to Moodle.
In general, the way web applications work is that any authenticated user has their session data stored somehow on the server.
At the beginning of the HTTP request, their session data is retrieved, and at the end of the request it is stored back.
What happens when the same user sends a second request before the first one finishes?
If PHP allowed access to the same session for more than one request at a time, the data could easily get corrupted - by two separate processes reading from and writing to it in parallel.
So instead, the session is locked. The second request will get the session data only after the first request finishes (and its session data is safely written).
To illustrate it, let’s say we have 2 PHP scripts: sleep1.php and sleep2.php. They are exactly the same and all they do is sleep for 5 seconds:
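For example (a sketch - timestamps are printed so we can see when each script ran):

```php
<?php
echo 'Started: ' . date('H:i:s') . "<br>";
sleep(5);
echo 'Finished: ' . date('H:i:s');
```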
When I run them in the browser, one after another, in separate browser tabs I get the results:
I have run the second tab 1 second after the first one, both took 5 seconds to finish. Simple.
Now I replace the code of both with a call to session_start():
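The scripts now look like this (a sketch) - the only difference is the session_start() call at the top:

```php
<?php
session_start(); // Acquires the session lock - blocks if another request holds it.

echo 'Started: ' . date('H:i:s') . "<br>";
sleep(5);
echo 'Finished: ' . date('H:i:s');
```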
And just like before I open sleep1.php in the first tab and sleep2.php second:
I clicked to open sleep1.php at 20:25:33. It acquired the session immediately at 20:25:33 and then kept running until 20:25:38.
Meanwhile, the second script was run at 20:25:34, but the call to session_start() blocked it until 20:25:38 - exactly the time the first script finished.
Then, after getting the session, sleep2.php ran for 5 seconds, until 20:25:43.
To alleviate the problem caused by the session lock, we can voluntarily release the lock before we finish the script.
Imagine that during those 5 seconds (we just sleep now, but let’s pretend this is real work), everything we need to write back to the session is written within the first 2 seconds.
Then, during the remaining 3 seconds we do some other processing, that we know will not need to update the user’s session.
To release the lock earlier, we can use session_write_close().
Let’s extend our scripts:
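A sketch of the extended scripts: 2 seconds of work that may still touch the session, then the lock is released, then 3 more seconds of work that does not need the session (the $_SESSION key written here is purely illustrative):

```php
<?php
session_start();

sleep(2); // Work that may still write to $_SESSION.
$_SESSION['progress'] = 'done'; // Illustrative session write.

session_write_close(); // Release the lock - other requests may proceed now.

sleep(3); // Work that no longer needs the session.
```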
The result now:
That’s better! Previously the time from start to end for sleep2.php was 9 seconds - 4 seconds waiting for the lock plus 5 seconds of its own processing.
Now the wait for the lock is down to 1 second and the sleep2.php run took 6 seconds.
The second script gets the lock as soon as session_write_close() is called in the first one.
The session locking is sometimes problematic in applications like Moodle - because nearly all HTTP requests in Moodle come from authenticated users.
Those users have the session created for them and therefore the locking described above affects them.
How to change the time after which unused LXD images are deleted?
When downloaded for the first time, LXD images are cached in /var/cache/lxc/download directory.
If they are unused for images.remote_cache_expiry days, they are deleted.
The default value is 10 (days).
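To change the expiry, set the server configuration key - for example, to 30 days (the value is in days):

```
lxc config set images.remote_cache_expiry 30
```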
We have a typical web application that is deployed behind a load balancer with several web servers behind.
The web servers are nearly identical - running the same OS, code and packages. But they differ in one way: some have
quick access to the external assets repository. The other web servers have access as well, but it is much slower.
It looks something like this:
The web application is Moodle. It’s configured to use the file system repository
to allow access to the assets server. One of the web servers (web1) is in a co-located network with fast access to the storage.
Access from web2 and web3 is still possible (and configured), but much slower. It would be beneficial if all requests that trigger access to the
external file server were routed through the web1 server. In practice this means that we want to send all requests to /repository/* scripts to web1.
HAProxy can do exactly what we need:
If a request is to http:///repository/* then route it to web1.
Let’s not compromise high availability - in case our preferred web1 is down, send the requests above to web2 or web3.
Send all the other requests to web1 (no point in making web1 dedicated for the repository-requests only), web2 or web3.
We start the HAProxy configuration with the frontend section. The custom acl rule, called “url_file” below, will match requests
whose path starts with /repository/.
If the acl condition is met, we send the request to the backend called “file_servers”. Otherwise the request goes to the default
backend called “all_servers”.
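A sketch of the frontend section (the frontend name, bind address and matcher shown here are illustrative):

```
frontend http-in
    bind *:80
    acl url_file path_beg /repository/
    use_backend file_servers if url_file
    default_backend all_servers
```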
Next comes the definition of our backend all_servers. Nothing really unusual here - we just allow slightly fewer (16 instead of 32)
maximum connections to web1, as this one will be a bit busier serving the additional requests.
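A sketch of that backend (server addresses are made up for illustration):

```
backend all_servers
    balance roundrobin
    server web1 10.0.0.1:80 check maxconn 16
    server web2 10.0.0.2:80 check maxconn 32
    server web3 10.0.0.3:80 check maxconn 32
```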
In the “file_servers” backend we want to use the web1 server only, unless it is down. Only then will the other web servers take over - this
is done with the “backup” option in the HAProxy configuration:
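A sketch of that backend (again, addresses are illustrative) - web2 and web3 receive traffic only when web1 fails its health check:

```
backend file_servers
    server web1 10.0.0.1:80 check
    server web2 10.0.0.2:80 check backup
    server web3 10.0.0.3:80 check backup
```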
The configuration works as expected. The requests to http:///repository/repository_ajax.php are handled by web1, but
when web1 is shut down, web2 and web3 are used.