Just finished a nice course available on Coursera, created by DeepLearning.AI & Amazon Web Services. It’s taught by AWS employees: Antje Barth, Shelbee Eigenbrode, Mike Chambers, and Chris Fregly.

The course is technical enough to give you an overall understanding of how LLMs work and how one can integrate them into a production application.

Lecture notes were made available under a Creative Commons license - thanks, DeepLearning.AI!

Here they are:

  • Generative AI with LLMs Lecture Notes - week 1
  • Generative AI with LLMs Lecture Notes - week 2
  • Generative AI with LLMs Lecture Notes - week 3

We want to run a process in the background and continue script execution. At some point we want to stop and wait for this process to finish, and we also want to capture its exit code. Once we have the PID of the backgrounded process, the wait builtin is all we need: it returns the exit code of the process it waited for, even if that process finished before wait was called.

#!/usr/bin/env bash

# start a sleep process in the background
echo "Starting sleep"
(sleep 5; exit 3) &
# $! holds the PID of the last backgrounded process
pid=$!

echo "Processing..."
sleep 10
echo "sleep must be done by now"
ps aux | grep '[s]leep'   # the [s] trick keeps grep from matching itself

# wait for the background process and capture its exit code
echo "Waiting for sleep to finish"
wait "$pid"
echo "Sleep finished with exit code $?"
echo "done"
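
The same pattern extends to several background jobs; a minimal sketch (the subshells are placeholders for real work) that collects each job's exit code individually:

```shell
#!/usr/bin/env bash

# Start a few background jobs and remember their PIDs.
pids=()
(sleep 1; exit 0) &
pids+=("$!")
(sleep 2; exit 3) &
pids+=("$!")

# wait on each PID separately to get that job's own exit code
for pid in "${pids[@]}"; do
    wait "$pid"
    echo "PID $pid exited with code $?"
done
```

Calling wait with an explicit PID is what makes per-job exit codes possible; a bare `wait` only reports the status of the last process waited for.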

Stanford researchers have done a great job evaluating several foundation models for compliance with the proposed EU AI Act.

The results are: llm_vs_ai_act_results

Rubrics used and a short description of the 12 requirements taken into account:

llm_vs_ai_act_table

1. Data sources (additive)

- +1: Very generic/vague description (e.g. “Internet data”)
- +1: Description of stages involved (e.g. training, instruction-tuning)
- +1: Sizes (relative or absolute) of different data sources
- +1: Fine-grained sourcing (e.g. specific URLs like Wikipedia, Reddit)

2. Data governance

- 0 points: No discussion of data governance
- 1 point: Vague mention of governance with no concreteness
- 2-3 points: Some grounded discussion or specific protocols around governance related to suitability and/or bias of data sources
- 4 points: Explicit constraint on requiring governance measures to include data

3. Copyrighted data

- 0 points: No description.
- 1 point: Very generic/vague acknowledgement of copyright (e.g. tied to “Internet data”)
- 2-3 points: Some grounded discussion of specific copyrighted materials
- 4 points: Fine-grained separation of copyrighted vs. non-copyrighted data

4. Compute (additive)

- +1: model size
- +1: training time as well as number and type of hardware units (e.g. number of A100s)
- +1: training FLOPs
- +1: broader context (e.g. compute provider, how FLOPs are measured)

5. Energy (additive)

- +1: energy usage
- +1: emissions
- +1: discussion of measurement strategy (e.g. cluster location and related details)
- +1: discussion of mitigations to reduce energy usage/emissions

6. Capabilities and limitations

- 0 points: No description.
- 1 point: Very generic/vague description
- 2-3 points: Some grounded discussion of specific capabilities and limitations
- 4 points: Fine-grained discussion grounded in evaluations/specific examples

7. Risks and mitigations (additive)

- +1: list of risks
- +1: list of mitigations
- +1: description of the extent to which mitigations successfully reduce risk
- +1: justification for why non-mitigated risks cannot be mitigated

8. Evaluations (additive)

- +1: measurement of accuracy on multiple benchmarks
- +1: measurement of unintentional harms (e.g. bias)
- +1: measurement of intentional harms (e.g. malicious use)
- +1: measurement of other factors (e.g. robustness, calibration, user experience)

9. Testing (additive)

- +1 or +2: disclosure of results and process of (significant) internal testing
- +1: external evaluations due to external access (e.g. HELM)
- +1: external red-teaming or adversarial evaluation/stress-testing (e.g. ARC)

10. Machine-generated content

- +1-3 points: Disclosure that content is machine-generated within the direct purview of the foundation model provider (e.g. when using the OpenAI API).
- +1: Disclosed mechanism to ensure content is identifiable as machine-generated even beyond the direct purview of the foundation model provider (e.g. watermarking).

11. Member states

- 0 points: No description of deployment practices in relation to the EU.
- 2 points: Disclosure of explicitly permitted/prohibited EU member states at the organization operation level.
- 4 points: Fine-grained discussion of practice per state, including any discrepancies in how the foundation model is placed on the market or put into service that differ across EU member states.

12. Downstream documentation

- 0 points: No description of any informational obligations or documentation.
- 1 point: Generic acknowledgement that information should be provided downstream.
- 2 points: Existence of relevant documentation, including in public reports, though mechanism for supplying to downstream developers is unclear.
- 3-4 points: (Fairly) clear mechanism for ensuring foundation model provider provides appropriate documentation to downstream providers.

Full list of requirements found by the researchers in the AI Act draft:

  • Registry [Article 39, item 69, page 8 as well as Article 28b, paragraph 2g, page 40]. In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonization legislation, should be required to register their high-risk AI system and foundation models in an EU database, to be established and managed by the Commission. This database should be freely and publicly accessible, easily understandable and machine-readable. The database should also be user friendly and easily navigable, with search functionalities at minimum allowing the general public to search the database for specific high-risk systems, locations, categories of risk under Annex IV and keywords. Deployers who are public authorities or European Union institutions, bodies, offices and agencies or deployers […]
  • Provider name [Annex VIII, Section C, page 24]. Name, address and contact details of the provider.
  • Model name [Annex VIII, Section C, page 24]. Trade name and any additional unambiguous reference allowing the identification of the foundation model.
  • Data sources [Annex VIII, Section C, page 24]. Description of the data sources used in the development of the foundation model.
  • Capabilities and limitations [Annex VIII, Section C, page 24]. Description of the capabilities and limitations of the foundation model.
  • Risks and mitigations [Annex VIII, Section C, page 24 and Article 28b, paragraph 2a, page 39]. The reasonably foreseeable risks and the measures that have been taken to mitigate them as well as remaining non-mitigated risks with an explanation on the reason why they cannot be mitigated.
  • Compute [Annex VIII, Section C, page 24]. Description of the training resources used by the foundation model including computing power required, training time, and other relevant information related to the size and power of the model.
  • Evaluations [Annex VIII, Section C, page 24 as well as Article 28b, paragraph 2c, page 39]. Description of the model’s performance, including on public benchmarks or state of the art industry benchmarks.
  • Testing [Annex VIII, Section C, page 24 as well as Article 28b, paragraph 2c, page 39]. Description of the results of relevant internal and external testing and optimisation of the model.
  • Member states [Annex VIII, Section C, page 24]. Member States in which the foundation model is or has been placed on the market, put into service or made available in the Union.
  • Downstream documentation [Annex VIII, 60g, page 29 as well as Article 28b, paragraph 2e, page 40]. Also, foundation models should have information obligations and prepare all necessary technical documentation for potential downstream providers to be able to comply with their obligations under this Regulation.
  • Machine-generated content [Annex VIII, 60g, page 29]. Generative foundation models should ensure transparency about the fact the content is generated by an AI system, not by humans.
  • Pre-market compliance [Article 28b, paragraph 1, page 39]. A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licenses, as a service, as well as other distribution channels.
  • Data governance [Article 28b, paragraph 2b, page 39]. Process and incorporate only datasets that are subject to appropriate data governance measures for foundation models, in particular measures to examine the suitability of the data sources and possible biases and appropriate mitigation.
  • Energy [Article 28b, paragraph 2d, page 40]. Design and develop the foundation model, making use of applicable standards to reduce energy use, resource use and waste, as well as to increase energy efficiency, and the overall efficiency of the system. This shall be without prejudice to relevant existing Union and national law and this obligation shall not apply before the standards referred to in Article 40 are published. They shall be designed with capabilities enabling the measurement and logging of the consumption of energy and resources, and, where technically feasible, other environmental impact the deployment and use of the systems may have over their entire lifecycle.
  • Quality management [Article 28b, paragraph 2f, page 40]. Establish a quality management system to ensure and document compliance with this Article, with the possibility to experiment in fulfilling this requirement.
  • Upkeep [Article 28b, paragraph 3, page 40]. Providers of foundation models shall, for a period ending 10 years after their foundation models have been placed on the market or put into service, keep the technical documentation referred to in paragraph 1(c) at the disposal of the national competent authorities.
  • Law-abiding generated content [Article 28b, paragraph 4b, page 40]. Train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of Union law in line with the generally acknowledged state of the art, and without prejudice to fundamental rights, including the freedom of expression.
  • Training on copyrighted data [Article 28b, paragraph 4c, page 40]. Without prejudice to national or Union legislation on copyright, document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.
  • Adherence to general principles [Article 4a, paragraph 1, page 142-3]. All operators falling under this Regulation shall make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles establishing a high-level framework that promotes a coherent humancentric European approach to ethical and trustworthy Artificial Intelligence, which is fully in line with the Charter as well as the values on which the Union is founded: a) ‘human agency and oversight’ means that AI systems shall be developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans. b) ‘technical robustness and safety’ means that AI systems shall be developed and used in a way to minimize unintended and unexpected harm as well as being robust in case of unintended problems and being resilient against attempts to alter the use or performance of the AI system so as to allow unlawful use by malicious third parties. c) ‘privacy and data governance’ means that AI systems shall be developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity. d) ‘transparency’ means that AI systems shall be developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights. e) ‘diversity, non-discrimination and fairness’ means that AI systems shall be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law. 
f) ‘social and environmental well-being’ means that AI systems shall be developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy. For foundation models, the general principles are translated into and complied with by providers by means of the requirements set out in Articles 28 to 28b.
  • System is designed so users know its an AI [Article 52(1) Paragraph 1 - not in the Compromise text, but invoked in 28(b), paragraph 4a, page 40]. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
  • Appropriate levels [Article 28b, paragraph 2c, page 39]. design and develop the foundation model in order to achieve throughout its lifecycle appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity assessed through appropriate methods such as model evaluation with the involvement of independent experts, documented analysis, and extensive testing during conceptualisation, design, and development

Source


The paper that kick-started LLM development is attention_is_all_you_need from Google, published in June 2017. It introduced the Transformer architecture and the “self-attention” mechanism.


A serious weakness has been found in the proprietary encryption ciphers of the TETRA (TErrestrial Trunked RAdio) standard, which is used by police forces and other organizations.

Researchers Carlo Meijer, Wouter Bokslag, and Jos Wetzels from Midnight Blue, who discovered the issue, suggest it may be an intentional backdoor.

The description of CVE-2022-24402 reads: “The TEA1 algorithm has a backdoor that reduces the original 80-bit key to a key size which is trivially brute-forceable on consumer hardware in minutes.”


I want to find all fonts that support Polish accented characters (ąćęłńóśżź).

First we will use printf to check the Unicode code point of a character:

printf "%x" \'ą
105

Then use the fc-list command with a “charset” query (here 17a, the code point of ź):

fc-list ':charset=17a'

/usr/share/fonts/truetype/lato/Lato-Medium.ttf: Lato,Lato Medium:style=Medium,Regular
/usr/share/fonts/truetype/lato/Lato-SemiboldItalic.ttf: Lato,Lato Semibold:style=Semibold Italic,Italic
/usr/share/texmf/fonts/opentype/public/lm/lmmonolt10-oblique.otf: Latin Modern Mono Light,LM Mono Light 10:style=10 Oblique,Italic
/usr/share/fonts/truetype/dejavu/DejaVuSerif-Bold.ttf: DejaVu Serif:style=Bold
...

To see how the font is rendered, you can use gnome-font-viewer:

gnome-font-viewer /usr/share/fonts/truetype/lato/Lato-Medium.ttf
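
To find fonts that cover all the Polish characters at once, the per-character steps above can be combined into a single charset query; a sketch, assuming a UTF-8 locale and that fontconfig accepts a space-separated list of code points in one charset term:

```shell
#!/usr/bin/env bash

# Collect the code point of every Polish accented character.
chars="ąćęłńóśżź"
codepoints=""
for (( i=0; i<${#chars}; i++ )); do
    # printf with a leading quote prints the character's code point in hex
    cp=$(printf "%x" "'${chars:$i:1}")
    codepoints+="$cp "
done
echo "charset query: $codepoints"

# Ask fontconfig for fonts containing all listed code points
# (guarded so the script still runs on systems without fontconfig).
if command -v fc-list >/dev/null; then
    fc-list ":charset=$codepoints" family | sort -u
fi
```

Only fonts whose character set contains every listed code point should match, so this filters out fonts that cover, say, ó but not ż.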

An article on IEEE about a project that decided to switch from a microservice architecture to a monolith.


#!/bin/bash
# remove any leftover group from a previous run, then recreate it
sudo cgdelete cpu:/testgroup 2>/dev/null
sudo cgcreate -g cpu:/testgroup

# Test with and without the line below
sudo cgset -r cpu.cfs_quota_us=50000 testgroup

# See how much time we need to complete the operation
echo "Starting CPU intensive operations"
date
stress-ng --qsort 1 --qsort-ops 200 &

sleep 1

PID=$(pgrep stress-ng-qsort)

sudo cgclassify -g cpu:testgroup "$PID"

wait
echo "Finished"
date

The results with cfs_quota_us=50000 are below. The top utility shows about 50% CPU usage.

➜ ./cpu_stress.sh                                                                       
Starting CPU intensive operations
czw, 22 wrz 2022, 17:41:33 CEST
stress-ng: info:  [3345901] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info:  [3345901] dispatching hogs: 1 qsort
stress-ng: info:  [3345901] successful run completed in 33.05s
Finished
czw, 22 wrz 2022, 17:42:06 CEST

If the line is commented out (so no limitations apply), top shows 100% CPU usage and the output is:

➜ ./cpu_stress.sh
Starting CPU intensive operations
czw, 22 wrz 2022, 17:42:53 CEST
stress-ng: info:  [3348203] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info:  [3348203] dispatching hogs: 1 qsort
stress-ng: info:  [3348203] successful run completed in 15.98s
Finished
czw, 22 wrz 2022, 17:43:09 CEST

The corresponding file entries in sysfs:

$ cat /sys/fs/cgroup/cpu/testgroup/cpu.cfs_quota_us
50000
$ cat /sys/fs/cgroup/cpu/testgroup/tasks
3353629
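
On distributions that have switched to cgroup v2 (the unified hierarchy), cpu.cfs_quota_us and cpu.cfs_period_us are merged into a single cpu.max file. A rough equivalent of the 50% limit above, assuming the unified hierarchy is mounted at /sys/fs/cgroup; this is a sketch requiring root, not a tested recipe:

```shell
# cgroup v2: cpu.max holds "QUOTA PERIOD" in microseconds, so
# "50000 100000" allows 50 ms of CPU per 100 ms window (~50%).
sudo mkdir -p /sys/fs/cgroup/testgroup
echo "50000 100000" | sudo tee /sys/fs/cgroup/testgroup/cpu.max

# move a process into the group by writing its PID to cgroup.procs
echo "$PID" | sudo tee /sys/fs/cgroup/testgroup/cgroup.procs
```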


Monitoring files and directories

How to change the editor for the “systemctl edit” command.

Add to .bashrc / .zshrc:

 export SYSTEMD_EDITOR=vim

Add to /etc/sudoers:

 Defaults	env_keep += "SYSTEMD_EDITOR"

Get / set default target when booting up:

$ systemctl get-default
$ systemctl set-default ...target

Create a new service:

sudo systemctl edit --force --full new.service

Limit CPU usage for user:

sudo systemctl set-property user-1001.slice CPUQuota=10%
sudo systemctl daemon-reload

Limit IO read rate to 1MB/sec for user:

sudo systemctl set-property user-1001.slice BlockIOReadBandwidth="/dev/sda 1M"
sudo systemctl daemon-reload

journald log files:

  • transient in /run/log/journal/
  • persistent in /var/log/journal/ (create directory to enable)
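
Following the note above, enabling persistent logging amounts to creating the directory; a minimal sketch (needs root; the flush step asks journald to move the runtime journal into the new location):

```shell
# create the persistent journal directory
sudo mkdir -p /var/log/journal
# flush log data from /run/log/journal/ into /var/log/journal/
sudo journalctl --flush
```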

journald log entries since last boot:

journalctl -b

Create a new service with MemoryHigh and MemoryMax directives.

$ systemctl edit --force --full memory.service
[Unit]
Description=Simple service to test memory limit.

[Service]
ExecStart=/root/memory.sh
MemoryHigh=1M
MemoryMax=2M

[Install]
WantedBy=multi-user.target

The content of /root/memory.sh:

#!/bin/bash

# append large strings to an array until the cgroup memory limit kicks in
echo "$(date)" > /tmp/test.log
arr=()

for (( i=1; i<=10; i++ ))
do
    echo "Loop $i" >> /tmp/test.log
    for (( c=1; c<=600000; c++ ))
    do
        arr+=( "abcdefghijklmnopqrstquvxyabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzzabcdefghijklmnopqrstquvxyabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzzabcdefghijklmnopqrstquvxyabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzabcdefghijklmnopqrstquvxyzz" )
    done
done

sleep 10

Start the service:

root@tuxedo:/etc/systemd/system# systemctl daemon-reload
root@tuxedo:/etc/systemd/system# systemctl enable --now memory
root@tuxedo:/etc/systemd/system# systemctl status memory
● memory.service - Simple service to test memory limit.
Loaded: loaded (/etc/systemd/system/memory.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-09-01 22:26:27 CEST; 9s ago
Main PID: 14675 (memory.sh)
Tasks: 1 (limit: 76224)
Memory: 1.9M (high: 1.0M max: 2.0M)
CGroup: /system.slice/_memory.service
└─14675 /bin/bash /root/memory.sh

wrz 01 22:26:27 tuxedo systemd[1]: Started Simple service to test memory limit..

After a while:

root@tuxedo:/etc/systemd/system# systemctl status memory
● memory.service - Simple service to test memory limit.
Loaded: loaded (/etc/systemd/system/memory.service; enabled; vendor preset: enabled)
Active: failed (Result: signal) since Thu 2022-09-01 22:27:31 CEST; 8s ago
Process: 14675 ExecStart=/root/memory.sh (code=killed, signal=KILL)
Main PID: 14675 (code=killed, signal=KILL)

wrz 01 22:26:27 tuxedo systemd[1]: Started Simple service to test memory limit..
wrz 01 22:27:31 tuxedo systemd[1]: memory.service: Main process exited, code=killed, status=9/KILL
wrz 01 22:27:31 tuxedo systemd[1]: memory.service: Failed with result 'signal'.

And in the dmesg:

$ dmesg
[ 5679.682307] Tasks state (memory values in pages):
[ 5679.682308] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[ 5679.682310] [  14675]     0 14675   202158      862  1646592   199134             0 memory.sh
[ 5679.682316] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=/,mems_allowed=0,oom_memcg=/system.slice/_memory.service,task_memcg=/system.slice/_memory.service,task=memory.sh,pid=14675,uid=0
[ 5679.682330] Memory cgroup out of memory: Killed process 14675 (memory.sh) total-vm:808632kB, anon-rss:0kB, file-rss:3448kB, shmem-rss:0kB, UID:0 pgtables:1608kB oom_score_adj:0