<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.hpc.mk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Boris</id>
	<title>wiki.hpc.mk - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.hpc.mk/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Boris"/>
	<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php/Special:Contributions/Boris"/>
	<updated>2026-05-13T14:02:22Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.36.1</generator>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=127</id>
		<title>Authentication Mechanism</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=127"/>
		<updated>2021-08-31T12:55:10Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot; |Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[Authentication_Mechanism#Autentication_Setting|Setting up a user authentication mechanism]]&lt;br /&gt;
#[[Authentication_Mechanism#Autentication_Database|Setting up a local database for storing users and policies]]&lt;br /&gt;
#[[Authentication_Mechanism#Autentication_Database_Config|Database configuration for storing user information]]&lt;br /&gt;
#[[Authentication_Mechanism#Autentication_Partition|Partition model and division of execution space ]]&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Setting&amp;quot;&amp;gt;Setting up a user authentication mechanism&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
To secure the communication between the master server and the clients, it is necessary to set up two-way authentication based on a shared key. Following standard practice, Slurm uses the MUNGE service to validate the mutual communication of system components. MUNGE is designed for large HPC environments, in which the UID and GID identifiers of users and groups are validated with a shared cryptographic key. By default it uses AES-128 encryption together with a SHA-256 message authentication code. The generated key is then copied, with the appropriate permissions, to all nodes that will be used in the cluster. It should also be noted that every Slurm user should have the same UID and GID on all nodes, both for security and for consistent authentication when submitting tasks.&lt;br /&gt;
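&lt;br /&gt;
A minimal sketch of this key setup, assuming a Debian/Ubuntu system; the hostname node01 is a placeholder and paths may differ on other distributions: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# On the master node: install MUNGE and generate the shared key&lt;br /&gt;
sudo apt install munge libmunge2&lt;br /&gt;
sudo create-munge-key              # writes /etc/munge/munge.key&lt;br /&gt;
&lt;br /&gt;
# The key must be owned by the munge user and readable only by it&lt;br /&gt;
sudo chown munge:munge /etc/munge/munge.key&lt;br /&gt;
sudo chmod 400 /etc/munge/munge.key&lt;br /&gt;
&lt;br /&gt;
# Copy the same key, with the same permissions, to every node (node01 is a placeholder)&lt;br /&gt;
scp /etc/munge/munge.key root@node01:/etc/munge/munge.key&lt;br /&gt;
&lt;br /&gt;
# Start the service and verify that a credential validates on a remote node&lt;br /&gt;
sudo systemctl enable --now munge&lt;br /&gt;
munge -n | ssh node01 unmunge      # should report STATUS: Success (0)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;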
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database&amp;quot;&amp;gt;Setting up a local database for storing users and policies&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
MariaDB is used as the database for storing user data. During the initial installation, tables should be created for the users, their properties, the groups they belong to, and the permissions or policies granted to them. The service that manages all of these records, SlurmDBD, is then installed. Its main task is to collect accounting data about all user activities from multiple clusters in one location. &lt;br /&gt;
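&lt;br /&gt;
A minimal sketch of this database setup; the database name slurm_acct_db, the user slurm and the password are illustrative placeholders: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
-- In the MariaDB console: create the accounting database and its user&lt;br /&gt;
CREATE DATABASE slurm_acct_db;&lt;br /&gt;
CREATE USER 'slurm'@'localhost' IDENTIFIED BY 'changeme'; -- placeholder password&lt;br /&gt;
GRANT ALL PRIVILEGES ON slurm_acct_db.* TO 'slurm'@'localhost';&lt;br /&gt;
FLUSH PRIVILEGES;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The matching part of slurmdbd.conf (commonly /etc/slurm/slurmdbd.conf, or under /etc/slurm-llnl/ on Ubuntu) would then be: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AuthType=auth/munge&lt;br /&gt;
DbdHost=localhost&lt;br /&gt;
StorageType=accounting_storage/mysql&lt;br /&gt;
StorageHost=localhost&lt;br /&gt;
StorageUser=slurm&lt;br /&gt;
StoragePass=changeme&lt;br /&gt;
StorageLoc=slurm_acct_db&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;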
&lt;br /&gt;
Each user's UID is used as the main identifier, against which detailed statistics are kept for every task the user runs. Authentication is done through the MUNGE service, which checks the list of users on each node (by reading the Linux file /etc/passwd) and, based on it, stores the information about every user. Several configuration parameters control where this data is written, the most important of which are: &lt;br /&gt;
&lt;br /&gt;
* AccountingStorageType - Controls how the steps performed by a task, and the resources it requires, are recorded, &lt;br /&gt;
* JobCompType - Controls how job completion data is written: basic information such as the task name, the user who started it, the allocated nodes and resources, the start time, the completion time, and the exit status. This record can be extended with additional information when a database backend (MySQL or MariaDB) is used. A slurm.conf sketch of both parameters is shown below. &lt;br /&gt;
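&lt;br /&gt;
A minimal slurm.conf sketch of these two parameters, assuming the SlurmDBD and MariaDB setup described above; the host name and database name are illustrative: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Accounting records go through slurmdbd to the MariaDB database&lt;br /&gt;
AccountingStorageType=accounting_storage/slurmdbd&lt;br /&gt;
AccountingStorageHost=localhost&lt;br /&gt;
&lt;br /&gt;
# Job completion records written directly to a MySQL/MariaDB database&lt;br /&gt;
JobCompType=jobcomp/mysql&lt;br /&gt;
JobCompLoc=slurm_acct_db&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;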
&lt;br /&gt;
Record keeping is controlled by the slurmctld service (the main Slurm cluster control process). Potentially sensitive data that is visible to all users of the process should therefore be properly protected and authenticated. The same applies when data is sent over a network protocol (TCP, UDP), where the whole communication channel should be protected. Data stored directly in the database is encrypted with an appropriate mechanism that provides security and protection against possible abuse of the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database_Config&amp;quot;&amp;gt;Database configuration for storing user information&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
In addition to the standard way of storing data in text files (the basic way in which all SLURM data is stored), in our case the MariaDB database system is used. It provides high availability and fault tolerance: in case of hardware or other physical failures, the database backups are activated. Newer versions of MySQL and MariaDB use InnoDB as the default storage engine. The most important parameters when configuring InnoDB include the following (a configuration sketch is shown after this list): &lt;br /&gt;
* Innodb_buffer_pool_size - The size, in bytes, of the buffer pool in which InnoDB caches table and index data. The maximum value depends on the processor architecture (32-bit or 64-bit). A higher value reduces the amount of disk I/O needed when the same tables are accessed repeatedly. In our case the value is set to 1024MB, &lt;br /&gt;
* Innodb_lock_wait_timeout - The time, in seconds, that an InnoDB transaction waits for a lock held by another process before giving up. If a transaction (INSERT or UPDATE) waits longer than this limit, it is canceled and the entire operation is rolled back. In our case the value is set to 900 seconds (i.e. 15 minutes), &lt;br /&gt;
* Innodb_page_size - The page size used for InnoDB data files, 16KB by default. Pages are organized into extents and segments, and rows that do not fit on a single page are stored using additional overflow pages. Smaller page sizes are generally recommended for SSD drives, for higher performance.&lt;br /&gt;
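&lt;br /&gt;
A sketch of how the values above would look in the MariaDB server configuration (commonly /etc/mysql/my.cnf or a file under /etc/mysql/mariadb.conf.d/): &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mysqld]&lt;br /&gt;
# 1024MB buffer pool, as used in our case&lt;br /&gt;
innodb_buffer_pool_size=1024M&lt;br /&gt;
# Cancel and roll back transactions that wait on a lock longer than 15 minutes&lt;br /&gt;
innodb_lock_wait_timeout=900&lt;br /&gt;
# Default page size; it can only be changed before the data directory is initialized&lt;br /&gt;
innodb_page_size=16k&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;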
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Partition&amp;quot;&amp;gt;Partition model and division of execution space&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
The Slurm architecture groups machines (nodes) into larger logical units which, depending on the resources they provide (CPU, RAM, GPU), can be combined to perform many common tasks. This is configured through the partition model, which builds a set of related nodes with common attributes. During execution, each step or sub-step of a task can be placed on a different partition, depending on the task. &lt;br /&gt;
The user can further optimize resource usage by running task steps in parallel when specifying the task configuration. For example, a task may allocate all the nodes declared for it, or several parts of the task may independently use only a small portion of the allocation. Such an example is shown in fig. 2, with a division into two main partitions, one for each task.&lt;br /&gt;
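&lt;br /&gt;
A minimal slurm.conf sketch of such a two-partition division; node names, counts and limits are illustrative placeholders: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Four identical nodes, split into two partitions&lt;br /&gt;
NodeName=node[01-04] CPUs=16 RealMemory=64000 State=UNKNOWN&lt;br /&gt;
&lt;br /&gt;
# Default partition for the first task&lt;br /&gt;
PartitionName=part1 Nodes=node[01-02] Default=YES MaxTime=24:00:00 State=UP&lt;br /&gt;
&lt;br /&gt;
# Second partition for the other task&lt;br /&gt;
PartitionName=part2 Nodes=node[03-04] MaxTime=48:00:00 State=UP&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>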
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=MediaWiki:Sidebar&amp;diff=126</id>
		<title>MediaWiki:Sidebar</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=MediaWiki:Sidebar&amp;diff=126"/>
		<updated>2021-08-31T12:54:51Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* navigation&lt;br /&gt;
** mainpage|mainpage-description&lt;br /&gt;
** recentchanges-url|recentchanges&lt;br /&gt;
** randompage-url|randompage&lt;br /&gt;
** helppage|help-mediawiki&lt;br /&gt;
** http://www.mediawiki.org|MediaWiki home&lt;br /&gt;
* HPC&lt;br /&gt;
** Technical_Specification|Technical Specification&lt;br /&gt;
** LustreFS|LustreFS&lt;br /&gt;
** SLURM|Initiate and manage SLURM tasks&lt;br /&gt;
** Authentication_Mechanism|Authentication Mechanism&lt;br /&gt;
** Slurm_Services|Slurm Services&lt;br /&gt;
* SEARCH&lt;br /&gt;
* TOOLBOX&lt;br /&gt;
* LANGUAGES&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=MediaWiki:Sidebar&amp;diff=125</id>
		<title>MediaWiki:Sidebar</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=MediaWiki:Sidebar&amp;diff=125"/>
		<updated>2021-08-31T12:54:39Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* navigation&lt;br /&gt;
** mainpage|mainpage-description&lt;br /&gt;
** recentchanges-url|recentchanges&lt;br /&gt;
** randompage-url|randompage&lt;br /&gt;
** helppage|help-mediawiki&lt;br /&gt;
** http://www.mediawiki.org|MediaWiki home&lt;br /&gt;
* HPC&lt;br /&gt;
** Technical_Specification|Technical Specification&lt;br /&gt;
** LustreFS|LustreFS&lt;br /&gt;
** SLURM|Initiate and manage SLURM tasks&lt;br /&gt;
** Authentication_Mechanism|Autentication Mechanism&lt;br /&gt;
** Slurm_Services|Slurm Services&lt;br /&gt;
* SEARCH&lt;br /&gt;
* TOOLBOX&lt;br /&gt;
* LANGUAGES&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Autentication_Mechanism&amp;diff=124</id>
		<title>Autentication Mechanism</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Autentication_Mechanism&amp;diff=124"/>
		<updated>2021-08-31T12:54:09Z</updated>

		<summary type="html">&lt;p&gt;Boris: Boris moved page Autentication Mechanism to Authentication Mechanism&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Authentication Mechanism]]&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=123</id>
		<title>Authentication Mechanism</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=123"/>
		<updated>2021-08-31T12:54:09Z</updated>

		<summary type="html">&lt;p&gt;Boris: Boris moved page Autentication Mechanism to Authentication Mechanism&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot; |Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Setting|Setting up a user authentication mechanism]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Database|Setting up a local database for storing users and policies]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Database_Config|Database configuration for storing user information]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Partition|Partition model and division of execution space ]]&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Setting&amp;quot;&amp;gt;Setting up a user authentication mechanism&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
To secure the communication between the master server and the clients, it is necessary to set up two-way authentication based on a shared key. Following standard practice, Slurm uses the MUNGE service to validate the mutual communication of system components. MUNGE is designed for large HPC environments, in which the UID and GID identifiers of users and groups are validated with a shared cryptographic key. By default it uses AES-128 encryption together with a SHA-256 message authentication code. The generated key is then copied, with the appropriate permissions, to all nodes that will be used in the cluster. It should also be noted that every Slurm user should have the same UID and GID on all nodes, both for security and for consistent authentication when submitting tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database&amp;quot;&amp;gt;Setting up a local database for storing users and policies&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
MariaDB is used as the database for storing user data. During the initial installation, tables should be created for the users, their properties, the groups they belong to, and the permissions or policies granted to them. The service that manages all of these records, SlurmDBD, is then installed. Its main task is to collect accounting data about all user activities from multiple clusters in one location. &lt;br /&gt;
&lt;br /&gt;
Each user's UID is used as the main identifier, against which detailed statistics are kept for every task the user runs. Authentication is done through the MUNGE service, which checks the list of users on each node (by reading the Linux file /etc/passwd) and, based on it, stores the information about every user. Several configuration parameters control where this data is written, the most important of which are: &lt;br /&gt;
&lt;br /&gt;
* AccountingStorageType - Controls how the steps performed by a task, and the resources it requires, are recorded, &lt;br /&gt;
* JobCompType - Controls how job completion data is written: basic information such as the task name, the user who started it, the allocated nodes and resources, the start time, the completion time, and the exit status. This record can be extended with additional information when a database backend (MySQL or MariaDB) is used. &lt;br /&gt;
&lt;br /&gt;
Record keeping is controlled by the slurmctld service (the main Slurm cluster control process). Potentially sensitive data that is visible to all users of the process should therefore be properly protected and authenticated. The same applies when data is sent over a network protocol (TCP, UDP), where the whole communication channel should be protected. Data stored directly in the database is encrypted with an appropriate mechanism that provides security and protection against possible abuse of the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database_Config&amp;quot;&amp;gt;Database configuration for storing user information&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
In addition to the standard way of storing data in text files (the basic way in which all SLURM data is stored), in our case the MariaDB database system is used. It provides high availability and fault tolerance: in case of hardware or other physical failures, the database backups are activated. Newer versions of MySQL and MariaDB use InnoDB as the default storage engine. The most important parameters when configuring InnoDB include the following: &lt;br /&gt;
* Innodb_buffer_pool_size - The size, in bytes, of the buffer pool in which InnoDB caches table and index data. The maximum value depends on the processor architecture (32-bit or 64-bit). A higher value reduces the amount of disk I/O needed when the same tables are accessed repeatedly. In our case the value is set to 1024MB, &lt;br /&gt;
* Innodb_lock_wait_timeout - The time, in seconds, that an InnoDB transaction waits for a lock held by another process before giving up. If a transaction (INSERT or UPDATE) waits longer than this limit, it is canceled and the entire operation is rolled back. In our case the value is set to 900 seconds (i.e. 15 minutes), &lt;br /&gt;
* Innodb_page_size - The page size used for InnoDB data files, 16KB by default. Pages are organized into extents and segments, and rows that do not fit on a single page are stored using additional overflow pages. Smaller page sizes are generally recommended for SSD drives, for higher performance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Partition&amp;quot;&amp;gt;Partition model and division of execution space&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
The Slurm architecture groups machines (nodes) into larger logical units which, depending on the resources they provide (CPU, RAM, GPU), can be combined to perform many common tasks. This is configured through the partition model, which builds a set of related nodes with common attributes. During execution, each step or sub-step of a task can be placed on a different partition, depending on the task. &lt;br /&gt;
The user can further optimize resource usage by running task steps in parallel when specifying the task configuration. For example, a task may allocate all the nodes declared for it, or several parts of the task may independently use only a small portion of the allocation. Such an example is shown in fig. 2, with a division into two main partitions, one for each task.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=MediaWiki:Sidebar&amp;diff=122</id>
		<title>MediaWiki:Sidebar</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=MediaWiki:Sidebar&amp;diff=122"/>
		<updated>2021-08-31T10:13:45Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* navigation&lt;br /&gt;
** mainpage|mainpage-description&lt;br /&gt;
** recentchanges-url|recentchanges&lt;br /&gt;
** randompage-url|randompage&lt;br /&gt;
** helppage|help-mediawiki&lt;br /&gt;
** http://www.mediawiki.org|MediaWiki home&lt;br /&gt;
* HPC&lt;br /&gt;
** Technical_Specification|Technical Specification&lt;br /&gt;
** LustreFS|LustreFS&lt;br /&gt;
** SLURM|Initiate and manage SLURM tasks&lt;br /&gt;
** Autentication_Mechanism|Autentication Mechanism&lt;br /&gt;
** Slurm_Services|Slurm Services&lt;br /&gt;
* SEARCH&lt;br /&gt;
* TOOLBOX&lt;br /&gt;
* LANGUAGES&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Slurm_Services&amp;diff=121</id>
		<title>Slurm Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Slurm_Services&amp;diff=121"/>
		<updated>2021-08-31T10:12:45Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[Slurm_Services#Services_Documenting|Documentation for creating SLURM Workload Manager]]&lt;br /&gt;
#[[Slurm_Services#Services_Installation|Required installation and predefined environment ]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Services_Documenting&amp;quot;&amp;gt;Documentation for creating SLURM Workload Manager&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
Creating a scheduler in a heterogeneous cluster environment covers the following: first, it is necessary to define the main (master) node, which forwards the user-defined scripts, in the form of jobs, to the machines defined in the cluster. There are several different platforms for creating a cluster environment; in this document the SLURM Workload Manager is discussed.&lt;br /&gt;
 &lt;br /&gt;
Advantages of using Slurm as a task environment are: &lt;br /&gt;
* Support for large cluster systems and multiprocessor tasks - The SLURM environment enables the start-up, execution and monitoring of parallel tasks implemented via the Message Passing Interface (MPI) on a subset of the allocated nodes, and allows efficient use of resources (nodes) according to user-specific policies, &lt;br /&gt;
* Task profiling - Periodic review of each resource assigned to a specific task (CPU runtime, RAM, power consumption, network resources, and disk space usage), &lt;br /&gt;
* Support for the MapReduce+ algorithm, &lt;br /&gt;
* Support for creating a sequence of tasks, i.e. one task can be divided into several sub-tasks that run in parallel for more efficient use of the given resources, &lt;br /&gt;
* Database integration - where all user parameters and settings are stored, &lt;br /&gt;
* Use of GPU resources to perform tasks - A large number of options for assigning graphics (GPU) resources to a specific task or tasks, in order to better run advanced algorithms in the field of machine learning.&lt;br /&gt;
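&lt;br /&gt;
As an illustration of the points above, a minimal sketch of a batch script that runs a small parallel task on two nodes; the partition name part1 is a placeholder: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=demo&lt;br /&gt;
#SBATCH --partition=part1&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --ntasks-per-node=4&lt;br /&gt;
#SBATCH --time=00:10:00&lt;br /&gt;
&lt;br /&gt;
# Run one copy of the command in every allocated task slot&lt;br /&gt;
srun hostname&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The script is submitted with sbatch and its progress can be followed with squeue.&lt;br /&gt;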
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Services_Installation&amp;quot;&amp;gt;Required installation and predefined environment&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
For the initial installation, a Linux Ubuntu 20.04 environment was used, together with the necessary installation packages. The server-side installation includes the following components (an installation sketch is shown after this list): &lt;br /&gt;
* Slurmctld - The main process that assigns tasks to the nodes in use and manages their execution. It also monitors the active nodes (machines) in the cluster, &lt;br /&gt;
* Slurmdbd - The process that records user data, their rules and policies, and the allowed execution times, &lt;br /&gt;
* Slurmd - The per-node process that launches and controls the child Slurm sub-processes and handles further communication with the other components of the system.&lt;br /&gt;
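&lt;br /&gt;
A minimal installation sketch under the Ubuntu 20.04 assumption above; the package names are those in the Ubuntu repositories: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# On the master (server) node&lt;br /&gt;
sudo apt install slurmctld slurmdbd&lt;br /&gt;
sudo systemctl enable --now slurmctld slurmdbd&lt;br /&gt;
&lt;br /&gt;
# On every compute node&lt;br /&gt;
sudo apt install slurmd&lt;br /&gt;
sudo systemctl enable --now slurmd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>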
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Slurm_Services&amp;diff=120</id>
		<title>Slurm Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Slurm_Services&amp;diff=120"/>
		<updated>2021-08-31T10:11:00Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[Slurm_Services#Services_Documenting|Documentation for creating SLURM Workload Manager]]&lt;br /&gt;
#[[Slurm_Services#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Services_Documenting&amp;quot;&amp;gt;Documentation for creating SLURM Workload Manager&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
Creating a scheduler in a heterogeneous cluster environment covers the following: first, it is necessary to define the main (master) node, which forwards the user-defined scripts, in the form of jobs, to the machines defined in the cluster. There are several different platforms for creating a cluster environment; in this document the SLURM Workload Manager is discussed.&lt;br /&gt;
 &lt;br /&gt;
Advantages of using Slurm as a task environment are: &lt;br /&gt;
* Support for large cluster systems and multiprocessor tasks - The SLURM environment enables the start-up, execution and monitoring of parallel tasks implemented via the Message Passing Interface (MPI) on a subset of the allocated nodes, and allows efficient use of resources (nodes) according to user-specific policies, &lt;br /&gt;
* Task profiling - Periodic review of each resource assigned to a specific task (CPU runtime, RAM, power consumption, network resources, and disk space usage), &lt;br /&gt;
* Support for the MapReduce+ algorithm, &lt;br /&gt;
* Support for creating a sequence of tasks, i.e. one task can be divided into several sub-tasks that run in parallel for more efficient use of the given resources, &lt;br /&gt;
* Database integration - where all user parameters and settings are stored, &lt;br /&gt;
* Use of GPU resources to perform tasks - A large number of options for assigning graphics (GPU) resources to a specific task or tasks, in order to better run advanced algorithms in the field of machine learning.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Slurm_Services&amp;diff=119</id>
		<title>Slurm Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Slurm_Services&amp;diff=119"/>
		<updated>2021-08-31T10:10:46Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[Slurm_Services#Services_Documenting|Documentation for creating SLURM Workload Manager]]&lt;br /&gt;
#[[Slurm_Services#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Services_Documenting&amp;quot;&amp;gt;Documentation for creating SLURM Workload Manager&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
Creating a scheduler in a heterogeneous cluster environment covers the following: first, it is necessary to define the main (master) node, which forwards the user-defined scripts, in the form of jobs, to the machines defined in the cluster. There are several different platforms for creating a cluster environment; in this document the SLURM Workload Manager is discussed.&lt;br /&gt;
Advantages of using Slurm as a task environment are: &lt;br /&gt;
* Support for large cluster systems and multiprocessor tasks - The SLURM environment enables the start-up, execution and monitoring of parallel tasks implemented via the Message Passing Interface (MPI) on a subset of the allocated nodes, and allows efficient use of resources (nodes) according to user-specific policies, &lt;br /&gt;
* Task profiling - Periodic review of each resource assigned to a specific task (CPU runtime, RAM, power consumption, network resources, and disk space usage), &lt;br /&gt;
* Support for the MapReduce+ algorithm, &lt;br /&gt;
* Support for creating a sequence of tasks, i.e. one task can be divided into several sub-tasks that run in parallel for more efficient use of the given resources, &lt;br /&gt;
* Database integration - where all user parameters and settings are stored, &lt;br /&gt;
* Use of GPU resources to perform tasks - A large number of options for assigning graphics (GPU) resources to a specific task or tasks, in order to better run advanced algorithms in the field of machine learning.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Slurm_Services&amp;diff=118</id>
		<title>Slurm Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Slurm_Services&amp;diff=118"/>
		<updated>2021-08-31T10:10:29Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[Slurm_Services#Services_Documenting|Documentation for creating SLURM Workload Manager]]&lt;br /&gt;
#[[Slurm_Services#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Services_Documenting&amp;quot;&amp;gt;Documentation for creating SLURM Workload Manager&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
Creating a scheduler in a heterogeneous cluster environment covers the following: first, it is necessary to define the main (master) node, which forwards the user-defined scripts, in the form of jobs, to the machines defined in the cluster. There are several different platforms for creating a cluster environment; in this document the SLURM Workload Manager is discussed. &lt;br /&gt;
Advantages of using Slurm as a task environment are: &lt;br /&gt;
* Support for large cluster systems and multiprocessor tasks - The SLURM environment enables the start-up, execution and monitoring of parallel tasks implemented via the Message Passing Interface (MPI) on a subset of the allocated nodes, and allows efficient use of resources (nodes) according to user-specific policies, &lt;br /&gt;
* Task profiling - Periodic review of each resource assigned to a specific task (CPU runtime, RAM, power consumption, network resources, and disk space usage), &lt;br /&gt;
* Support for the MapReduce+ algorithm, &lt;br /&gt;
* Support for creating a sequence of tasks, i.e. one task can be divided into several sub-tasks that run in parallel for more efficient use of the given resources, &lt;br /&gt;
* Database integration - where all user parameters and settings are stored, &lt;br /&gt;
* Use of GPU resources to perform tasks - A large number of options for assigning graphics (GPU) resources to a specific task or tasks, in order to better run advanced algorithms in the field of machine learning.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Slurm_Services&amp;diff=117</id>
		<title>Slurm Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Slurm_Services&amp;diff=117"/>
		<updated>2021-08-31T10:07:33Z</updated>

		<summary type="html">&lt;p&gt;Boris: Created page with &amp;quot;Documentation for creating SLURM Workload Manager&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Documentation for creating SLURM Workload Manager&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=MediaWiki:Sidebar&amp;diff=116</id>
		<title>MediaWiki:Sidebar</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=MediaWiki:Sidebar&amp;diff=116"/>
		<updated>2021-08-31T09:28:13Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* navigation&lt;br /&gt;
** mainpage|mainpage-description&lt;br /&gt;
** recentchanges-url|recentchanges&lt;br /&gt;
** randompage-url|randompage&lt;br /&gt;
** helppage|help-mediawiki&lt;br /&gt;
** http://www.mediawiki.org|MediaWiki home&lt;br /&gt;
* HPC&lt;br /&gt;
** Technical_Specification|Technical Specification&lt;br /&gt;
** LustreFS|LustreFS&lt;br /&gt;
** SLURM|Initiate and manage SLURM tasks&lt;br /&gt;
** Autentication_Mechanism|Autentication Mechanism&lt;br /&gt;
* SEARCH&lt;br /&gt;
* TOOLBOX&lt;br /&gt;
* LANGUAGES&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=115</id>
		<title>Authentication Mechanism</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=115"/>
		<updated>2021-08-31T09:26:54Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot; |Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Setting|Setting up a user authentication mechanism]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Database|Setting up a local database for storing users and policies]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Database_Config|Database configuration for storing user information]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Partition|Partition model and division of execution space ]]&lt;br /&gt;
|}&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Setting&amp;quot;&amp;gt;Setting up a user authentication mechanism&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
To secure the communication between the master server and the clients, it is necessary to set up two-way authentication based on a shared key. Following standard practice, Slurm uses the MUNGE service to validate the mutual communication of system components. MUNGE is designed for large HPC environments, in which the UID and GID identifiers of users and groups are validated with a shared cryptographic key. By default it uses AES-128 encryption together with a SHA-256 message authentication code. The generated key is then copied, with the appropriate permissions, to all nodes that will be used in the cluster. It should also be noted that every Slurm user should have the same UID and GID on all nodes, both for security and for consistent authentication when submitting tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database&amp;quot;&amp;gt;Setting up a local database for storing users and policies&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
MariaDB is used as the database for storing user data. During the initial installation, tables should be created for the users, their properties, the groups they belong to, and the permissions or policies granted to them. The service that manages all of these records, SlurmDBD, is then installed. Its main task is to collect accounting data about all user activities from multiple clusters in one location. &lt;br /&gt;
&lt;br /&gt;
Each user's UID is used as the main identifier, against which detailed statistics are kept for every task the user runs. Authentication is done through the MUNGE service, which checks the list of users on each node (by reading the Linux file /etc/passwd) and, based on it, stores the information about every user. Several configuration parameters control where this data is written, the most important of which are: &lt;br /&gt;
&lt;br /&gt;
* AccountingStorageType - Controls how the steps performed by a task, and the resources it requires, are recorded, &lt;br /&gt;
* JobCompType - Controls how job completion data is written: basic information such as the task name, the user who started it, the allocated nodes and resources, the start time, the completion time, and the exit status. This record can be extended with additional information when a database backend (MySQL or MariaDB) is used. &lt;br /&gt;
&lt;br /&gt;
Record keeping is controlled by the slurmctld service (the main Slurm cluster control process). Potentially sensitive data that is visible to all users of the process should therefore be properly protected and authenticated. The same applies when data is sent over a network protocol (TCP, UDP), where the whole communication channel should be protected. Data stored directly in the database is encrypted with an appropriate mechanism that provides security and protection against possible abuse of the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database_Config&amp;quot;&amp;gt;Database configuration for storing user information&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
In addition to the standard way of storing data in text files (the basic way in which all SLURM data is stored), in our case the MariaDB database system is used. It provides high availability and fault tolerance: in case of hardware or other physical failures, the database backups are activated. Newer versions of MySQL and MariaDB use InnoDB as the default storage engine. The most important parameters when configuring InnoDB include the following: &lt;br /&gt;
* Innodb_buffer_pool_size - The size, in bytes, of the buffer pool in which InnoDB caches table and index data. The maximum value depends on the processor architecture (32-bit or 64-bit). A higher value reduces the amount of disk I/O needed when the same tables are accessed repeatedly. In our case the value is set to 1024MB, &lt;br /&gt;
* Innodb_lock_wait_timeout - The time, in seconds, that an InnoDB transaction waits for a lock held by another process before giving up. If a transaction (INSERT or UPDATE) waits longer than this limit, it is canceled and the entire operation is rolled back. In our case the value is set to 900 seconds (i.e. 15 minutes), &lt;br /&gt;
* Innodb_page_size - The page size used for InnoDB data files, 16KB by default. Pages are organized into extents and segments, and rows that do not fit on a single page are stored using additional overflow pages. Smaller page sizes are generally recommended for SSD drives, for higher performance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Partition&amp;quot;&amp;gt;Partition model and division of execution space&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
The Slurm architecture groups machines (nodes) into larger logical units which, depending on the resources they provide (CPU, RAM, GPU), can be combined to perform many common tasks. This is configured through the partition model, which builds a set of related nodes with common attributes. During execution, each step or sub-step of a task can be placed on a different partition, depending on the task. &lt;br /&gt;
The user can further optimize resource usage by running task steps in parallel when specifying the task configuration. For example, a task may allocate all the nodes declared for it, or several parts of the task may independently use only a small portion of the allocation. Such an example is shown in fig. 2, with a division into two main partitions, one for each task.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=114</id>
		<title>Authentication Mechanism</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=114"/>
		<updated>2021-08-31T09:26:40Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot; |Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Setting|Setting up a user authentication mechanism]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Database|Setting up a local database for storing users and policies]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Database_Config|Database configuration for storing user information]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Partition|Partition model and division of execution space ]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Setting&amp;quot;&amp;gt;Setting up a user authentication mechanism&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
To secure the communication between the master server and the clients, it is necessary to set up two-way authentication based on a shared key. Following standard practice, Slurm uses the MUNGE service to validate the mutual communication of system components. MUNGE is designed for large HPC environments, in which the UID and GID identifiers of users and groups are validated with a shared cryptographic key. By default it uses AES-128 encryption together with a SHA-256 message authentication code. The generated key is then copied, with the appropriate permissions, to all nodes that will be used in the cluster. It should also be noted that every Slurm user should have the same UID and GID on all nodes, both for security and for consistent authentication when submitting tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database&amp;quot;&amp;gt;Setting up a local database for storing users and policies&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
MariaDB is used as the database for storing user data. During the initial installation, tables should be created for the users, their properties, the groups they belong to, and the permissions or policies granted to them. The service that manages all of these records, SlurmDBD, is then installed. Its main task is to collect accounting data about all user activities from multiple clusters in one location. &lt;br /&gt;
&lt;br /&gt;
Each user's UID is used as the main identifier, against which detailed statistics are kept for every task the user runs. Authentication is done through the MUNGE service, which checks the list of users on each node (by reading the Linux file /etc/passwd) and, based on it, stores the information about every user. Several configuration parameters control where this data is written, the most important of which are: &lt;br /&gt;
&lt;br /&gt;
* AccountingStorageType - Controls how the steps performed by a task, and the resources it requires, are recorded, &lt;br /&gt;
* JobCompType - Controls how job completion data is written: basic information such as the task name, the user who started it, the allocated nodes and resources, the start time, the completion time, and the exit status. This record can be extended with additional information when a database backend (MySQL or MariaDB) is used. &lt;br /&gt;
&lt;br /&gt;
Record keeping is controlled by the slurmctld service (the main Slurm cluster control process). Potentially sensitive data that is visible to all users of the process should therefore be properly protected and authenticated. The same applies when data is sent over a network protocol (TCP, UDP), where the whole communication channel should be protected. Data stored directly in the database is encrypted with an appropriate mechanism that provides security and protection against possible abuse of the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database_Config&amp;quot;&amp;gt;Database configuration for storing user information&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
In addition to the standard way of storing data in text files (the basic way in which all SLURM data is stored), in our case the MariaDB database system is used. It provides high availability and fault tolerance: in case of hardware or other physical failures, the database backups are activated. Newer versions of MySQL and MariaDB use InnoDB as the default storage engine. The most important parameters when configuring InnoDB include the following: &lt;br /&gt;
* Innodb_buffer_pool_size - The size, in bytes, of the buffer pool in which InnoDB caches table and index data. The maximum value depends on the processor architecture (32-bit or 64-bit). A higher value reduces the amount of disk I/O needed when the same tables are accessed repeatedly. In our case the value is set to 1024MB, &lt;br /&gt;
* Innodb_lock_wait_timeout - The time, in seconds, that an InnoDB transaction waits for a lock held by another process before giving up. If a transaction (INSERT or UPDATE) waits longer than this limit, it is canceled and the entire operation is rolled back. In our case the value is set to 900 seconds (i.e. 15 minutes), &lt;br /&gt;
* Innodb_page_size - The page size used for InnoDB data files, 16KB by default. Pages are organized into extents and segments, and rows that do not fit on a single page are stored using additional overflow pages. Smaller page sizes are generally recommended for SSD drives, for higher performance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Partition&amp;quot;&amp;gt;Partition model and division of execution space&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
The Slurm architecture groups machines (nodes) into larger logical units which, depending on the resources they provide (CPU, RAM, GPU), can be combined to perform many common tasks. This is configured through the partition model, which builds a set of related nodes with common attributes. During execution, each step or sub-step of a task can be placed on a different partition, depending on the task. &lt;br /&gt;
The user can further optimize resource usage by running task steps in parallel when specifying the task configuration. For example, a task may allocate all the nodes declared for it, or several parts of the task may independently use only a small portion of the allocation. Such an example is shown in fig. 2, with a division into two main partitions, one for each task.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=113</id>
		<title>Authentication Mechanism</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=113"/>
		<updated>2021-08-31T09:24:36Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot; |Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Setting|Setting up a user authentication mechanism]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Database|Setting up a local database for storing users and policies]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Database_Config|Database configuration for storing user information]]&lt;br /&gt;
#[[Autentication_Mechanism#Slurm_ExamplesGPU|Examples with GPU memory selection]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Setting&amp;quot;&amp;gt;Setting up a user authentication mechanism&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
To secure the communication between the master server and the clients, it is necessary to set up two-way authentication based on a shared key. Following standard practice, Slurm uses the MUNGE service to validate the mutual communication of system components. MUNGE is designed for large HPC environments, in which the UID and GID identifiers of users and groups are validated with a shared cryptographic key. By default it uses AES-128 encryption together with a SHA-256 message authentication code. The generated key is then copied, with the appropriate permissions, to all nodes that will be used in the cluster. It should also be noted that every Slurm user should have the same UID and GID on all nodes, both for security and for consistent authentication when submitting tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database&amp;quot;&amp;gt;Setting up a local database for storing users and policies&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
MariaDB is used as the database for storing user data. During the initial installation, tables should be created for the users, their properties, the groups they belong to, and the permissions or policies granted to them. The service that manages all of these records, SlurmDBD, is then installed. Its main task is to collect accounting data about all user activities from multiple clusters in one location. &lt;br /&gt;
&lt;br /&gt;
Each user's UID is used as the main identifier, against which detailed statistics are kept for every task the user runs. Authentication is done through the MUNGE service, which checks the list of users on each node (by reading the Linux file /etc/passwd) and, based on it, stores the information about every user. Several configuration parameters control where this data is written, the most important of which are: &lt;br /&gt;
&lt;br /&gt;
* AccountingStorageType - Controls how the steps performed by a task, and the resources it requires, are recorded, &lt;br /&gt;
* JobCompType - Controls how job completion data is written: basic information such as the task name, the user who started it, the allocated nodes and resources, the start time, the completion time, and the exit status. This record can be extended with additional information when a database backend (MySQL or MariaDB) is used. &lt;br /&gt;
&lt;br /&gt;
Record keeping is controlled by the slurmctld service (the main Slurm cluster control process). Potentially sensitive data that is visible to all users of the process should therefore be properly protected and authenticated. The same applies when data is sent over a network protocol (TCP, UDP), where the whole communication channel should be protected. Data stored directly in the database is encrypted with an appropriate mechanism that provides security and protection against possible abuse of the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database_Config&amp;quot;&amp;gt;Database configuration for storing user information&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
In addition to the standard way of storing data in text files (the basic way in which all SLURM data is stored), in our case the MariaDB database system is used. It provides high availability and fault tolerance: in case of hardware or other physical failures, the database backups are activated. Newer versions of MySQL and MariaDB use InnoDB as the default storage engine. The most important parameters when configuring InnoDB include the following: &lt;br /&gt;
* Innodb_buffer_pool_size - The size, in bytes, of the buffer pool in which InnoDB caches table and index data. The maximum value depends on the processor architecture (32-bit or 64-bit). A higher value reduces the amount of disk I/O needed when the same tables are accessed repeatedly. In our case the value is set to 1024MB, &lt;br /&gt;
* Innodb_lock_wait_timeout - The time, in seconds, that an InnoDB transaction waits for a lock held by another process before giving up. If a transaction (INSERT or UPDATE) waits longer than this limit, it is canceled and the entire operation is rolled back. In our case the value is set to 900 seconds (i.e. 15 minutes), &lt;br /&gt;
* Innodb_page_size - The page size used for InnoDB data files, 16KB by default. Pages are organized into extents and segments, and rows that do not fit on a single page are stored using additional overflow pages. Smaller page sizes are generally recommended for SSD drives, for higher performance.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=112</id>
		<title>Authentication Mechanism</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=112"/>
		<updated>2021-08-31T09:23:42Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot; |Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Setting|Setting up a user authentication mechanism]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Database|Setting up a local database for storing users and policies]]&lt;br /&gt;
#[[Autentication_Mechanism#Autentication_Database_Config|Database configuration for storing user information]]&lt;br /&gt;
#[[Autentication_Mechanism#Slurm_ExamplesGPU|Examples with GPU memory selection]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Setting&amp;quot;&amp;gt;Setting up a user authentication mechanism&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
To secure the communication between the master server and the clients, it is necessary to set up two-way authentication based on a shared key. Following standard practice, Slurm uses the MUNGE service to validate the mutual communication of system components. MUNGE is designed for large HPC environments, in which the UID and GID identifiers of users and groups are validated with a shared cryptographic key. By default it uses AES-128 encryption together with a SHA-256 message authentication code. The generated key is then copied, with the appropriate permissions, to all nodes that will be used in the cluster. It should also be noted that every Slurm user should have the same UID and GID on all nodes, both for security and for consistent authentication when submitting tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database&amp;quot;&amp;gt;Setting up a local database for storing users and policies&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
MariaDB is used as the database for storing user data. During the initial installation, tables should be created for the users, their properties, the groups they belong to, and the permissions or policies granted to them. The service that manages all of these records, SlurmDBD, is then installed. Its main task is to collect accounting data about all user activities from multiple clusters in one location. &lt;br /&gt;
&lt;br /&gt;
Each user's UID is used as the main identifier, against which detailed statistics are kept for every task the user runs. Authentication is done through the MUNGE service, which checks the list of users on each node (by reading the Linux file /etc/passwd) and, based on it, stores the information about every user. Several configuration parameters control where this data is written, the most important of which are: &lt;br /&gt;
&lt;br /&gt;
* AccountingStorageType - Controls how the steps performed by a task, and the resources it requires, are recorded, &lt;br /&gt;
* JobCompType - Controls how job completion data is written: basic information such as the task name, the user who started it, the allocated nodes and resources, the start time, the completion time, and the exit status. This record can be extended with additional information when a database backend (MySQL or MariaDB) is used. &lt;br /&gt;
&lt;br /&gt;
Record keeping is controlled by the slurmctld service (the main Slurm cluster control process). Potentially sensitive data that is visible to all users of the process should therefore be properly protected and authenticated. The same applies when data is sent over a network protocol (TCP, UDP), where the whole communication channel should be protected. Data stored directly in the database is encrypted with an appropriate mechanism that provides security and protection against possible abuse of the system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Autentication_Database_Config&amp;quot;&amp;gt;Database configuration for storing user information&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
In addition to the standard way of storing data in a text file (which is the basic way of storing all SLURM data), in our case we use MariaDB database system. It provides high availability and tolerance in case of unwanted hardware or physical problems, which activates the backups of the databases. Newer versions of MySQL and MariaDB use InnoDB as the main table management service. The most important features when configuring InnoDB include the following parameters: &lt;br /&gt;
* Innodb_buffer_pool_size - The size, in bytes, of the buffer pool in which InnoDB caches table and index data. The maximum value depends on the processor architecture (32-bit or 64-bit). A higher value reduces the amount of disk I/O needed when the same tables are accessed multiple times. In our case this value is set to 1024MB,&lt;br /&gt;
* Innodb_lock_wait_timeout - The time, expressed in seconds, that an InnoDB transaction waits to access a row locked by another process. If a transaction (e.g. INSERT or UPDATE) waits longer than this value, it is cancelled and the entire operation is rolled back. In our case the value is set to 900 seconds (i.e. 15 minutes),&lt;br /&gt;
* Innodb_page_size - The page size of InnoDB data files, 16KB by default. Pages are organized into segments, and if a table row is larger than the page size, it is stored across multiple pages within a segment. Smaller page sizes are generally recommended on SSD drives for higher performance.&lt;br /&gt;
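&lt;br /&gt;
Put together, the values described above correspond to a fragment of the MariaDB server configuration (my.cnf) like the following sketch; note that innodb_page_size can only be chosen when the data directory is first initialized:&lt;br /&gt;
&lt;br /&gt;
[mysqld]&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt; Buffer pool of 1024MB, lock wait timeout of 900 seconds, default 16KB pages&lt;br /&gt;
innodb_buffer_pool_size=1024M&lt;br /&gt;
innodb_lock_wait_timeout=900&lt;br /&gt;
innodb_page_size=16k&lt;br /&gt;&lt;/div&gt;</summary>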
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=111</id>
		<title>Authentication Mechanism</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Authentication_Mechanism&amp;diff=111"/>
		<updated>2021-08-31T09:18:32Z</updated>

		<summary type="html">&lt;p&gt;Boris: Created page with &amp;quot;'''&amp;lt;h1 id=&amp;quot;Autentication_Setting&amp;quot;&amp;gt;Setting up a user authentication mechanism&amp;lt;/h1&amp;gt;'''  For greater security in the communication of the master server with the clients, it is ne...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''&amp;lt;h1 id=&amp;quot;Autentication_Setting&amp;quot;&amp;gt;Setting up a user authentication mechanism&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
For greater security in the communication between the master server and the clients, it is necessary to set up two-way authentication with the help of a shared key. Following standard practice, Slurm uses the munge protocol to validate the mutual communication of system components. In short, munge is designed for large HPC environments, in which the UID and GID identifiers of users and groups are validated with a shared cryptographic key. It typically uses a 128-bit AES encryption scheme as well as a SHA-256 hash for message validation. The generated key is then copied, with the corresponding permissions, to all nodes that will be used in the cluster. It should also be noted that users who will use Slurm should have the same UID and GID everywhere, for better security and consistent authentication when submitting tasks.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=110</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=110"/>
		<updated>2021-08-30T11:53:26Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
#[[SLURM#Slurm_ExamplesGPU|Examples with GPU memory selection]]&lt;br /&gt;
#[[SLURM#Slurm_Check|Checking the status of the job]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Job time limit (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||	# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors that occur while executing the job are written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the script's output and return values are written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated to the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4	||	# Execute only on specific nodes, e.g. cuda4 restricts the job to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example by executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
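For example, submitting the script above prints the assigned job ID (the number shown is illustrative):&lt;br /&gt;
&lt;br /&gt;
sbatch test_job.sh&lt;br /&gt;
Submitted batch job 1234&lt;br /&gt;
&lt;br /&gt;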
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for selecting the amount of GPU memory; each is obtained by combining the following directives in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 16 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 32 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 48 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 96 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Check&amp;quot;&amp;gt;Checking the status of the job&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
The status of the job can be checked via the &amp;quot;squeue&amp;quot; command, which shows the following information:&lt;br /&gt;
&lt;br /&gt;
* '''JOB ID'''&lt;br /&gt;
* '''Partition''' – Partition of the task&lt;br /&gt;
* '''Name''' – Name of the task&lt;br /&gt;
* '''USER''' – Name of the user performing the task&lt;br /&gt;
* '''ST''' – Job status (most common are PD - Pending, R - Running, S - Suspended, CG - Completing, CD - Completed)&lt;br /&gt;
* '''NODES''' – Number of nodes associated with the task&lt;br /&gt;
* '''TIME''' – Time elapsed since the job started&lt;br /&gt;
* '''NODELIST (REASON)''' – Indicates where the task is being performed or why it is still waiting.&lt;br /&gt;
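&lt;br /&gt;
For example, the output of squeue might look like the following (the job details are illustrative):&lt;br /&gt;
&lt;br /&gt;
JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
1234      batch test_job    boris  R       0:42      1 cuda4&lt;br /&gt;&lt;/div&gt;</summary>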
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=109</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=109"/>
		<updated>2021-08-30T11:52:13Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
#[[SLURM#Slurm_ExamplesGPU|Examples with GPU memory selection]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Job time limit (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||	# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors that occur while executing the job are written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the script's output and return values are written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated to the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4	||	# Execute only on specific nodes, e.g. cuda4 restricts the job to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example by executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for selecting the amount of GPU memory; each is obtained by combining the following directives in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 16 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 32 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 48 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 96 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Check&amp;quot;&amp;gt;Checking the status of the job&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
The status of the job can be checked via the &amp;quot;squeue&amp;quot; command, which shows the following information:&lt;br /&gt;
&lt;br /&gt;
* '''JOB ID'''&lt;br /&gt;
* '''Partition''' – Partition of the task&lt;br /&gt;
* '''Name''' – Name of the task&lt;br /&gt;
* '''USER''' – Name of the user performing the task&lt;br /&gt;
* '''ST''' – Job status (most common are PD - Pending, R - Running, S - Suspended, CG - Completing, CD - Completed)&lt;br /&gt;
* '''NODES''' – Number of nodes associated with the task&lt;br /&gt;
* '''TIME''' – Time elapsed since the job started&lt;br /&gt;
* '''NODELIST (REASON)''' – Indicates where the task is being performed or why it is still waiting.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=108</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=108"/>
		<updated>2021-08-30T11:50:34Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
#[[SLURM#Slurm_ExamplesGPU|Examples with GPU memory selection]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Job time limit (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||	# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors that occur while executing the job are written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the script's output and return values are written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated to the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4	||	# Execute only on specific nodes, e.g. cuda4 restricts the job to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example by executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for selecting the amount of GPU memory; each is obtained by combining the following directives in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 16 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 32 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 48 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 96 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Check&amp;quot;&amp;gt;Checking the status of the job&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
The status of the job can be checked via the &amp;quot;squeue&amp;quot; command, which shows the following information:&lt;br /&gt;
&lt;br /&gt;
* JOB ID&lt;br /&gt;
* Partition – Partition of the task&lt;br /&gt;
* Name – Name of the task&lt;br /&gt;
* USER – Name of the user performing the task&lt;br /&gt;
* ST – Job status (most common are PD - Pending, R - Running, S - Suspended, CG - Completing, CD - Completed)&lt;br /&gt;
* NODES – Number of nodes associated with the task&lt;br /&gt;
* TIME – Time elapsed since the job started&lt;br /&gt;
* NODELIST (REASON) – Indicates where the task is being performed or why it is still waiting.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=107</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=107"/>
		<updated>2021-08-30T11:18:40Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
#[[SLURM#Slurm_ExamplesGPU|Examples with GPU memory selection]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Job time limit (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||	# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors that occur while executing the job are written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the script's output and return values are written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated to the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4	||	# Execute only on specific nodes, e.g. cuda4 restricts the job to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example by executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for selecting the amount of GPU memory; each is obtained by combining the following directives in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 16 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 32 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 48 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 96 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Check&amp;quot;&amp;gt;Checking the status of the job&amp;lt;/h1&amp;gt;'''&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=106</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=106"/>
		<updated>2021-08-30T11:16:12Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
#[[SLURM#Slurm_ExamplesGPU|Examples with GPU memory selection]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Job time limit (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||	# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors that occur while executing the job are written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the script's output and return values are written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated to the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4	||	# Execute only on specific nodes, e.g. cuda4 restricts the job to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example by executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for selecting the amount of GPU memory; each is obtained by combining the following directives in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 16 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 32 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 48 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 96 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=105</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=105"/>
		<updated>2021-08-30T11:15:16Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks launched on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time of the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on specific nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example by executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
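&lt;br /&gt;
After submission, the job can be followed with the standard Slurm commands. A minimal sketch, assuming the script above is saved as test_job.sh (the file name and the job ID are placeholders):&lt;br /&gt;
&lt;br /&gt;
sbatch test_job.sh  # submit the script; sbatch prints the assigned job ID&lt;br /&gt;
&lt;br /&gt;
squeue -u $USER  # list this user's pending and running jobs&lt;br /&gt;
&lt;br /&gt;
sacct -j &amp;lt;jobid&amp;gt;  # accounting summary for the job once it has finished&lt;br /&gt;
&lt;br /&gt;
cat testoutput_&amp;lt;jobid&amp;gt;.out  # the file produced by the --output parameter&lt;br /&gt;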
&lt;br /&gt;
&lt;br /&gt;
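Note that conda create normally asks for confirmation, so inside a non-interactive batch job it may stop at the prompt, and recreating an environment that already exists may prompt or fail. A common alternative, shown only as a sketch (the environment name virtualenv matches the examples on this page), is to create the environment once from a login shell and let the job script only activate it:&lt;br /&gt;
&lt;br /&gt;
conda create -y -n virtualenv python=3.8  # run once, interactively, before submitting jobs&lt;br /&gt;
&lt;br /&gt;
The job script then keeps only the export PATH, source conda.sh and conda activate lines.&lt;br /&gt;
&lt;br /&gt;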
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for selecting the total GPU memory; each is obtained by combining the two directives below in the script. Each GPU on cuda1, cuda2 and cuda3 has 16 GB, while each GPU on cuda4 has 48 GB, so the total follows from the number of cards requested with --gres.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 16 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 32 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 48 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 96 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=104</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=104"/>
		<updated>2021-08-30T11:14:44Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks launched on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time of the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on specific nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example by executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
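&lt;br /&gt;
After submission, the job can be followed with the standard Slurm commands. A minimal sketch, assuming the script above is saved as test_job.sh (the file name and the job ID are placeholders):&lt;br /&gt;
&lt;br /&gt;
sbatch test_job.sh  # submit the script; sbatch prints the assigned job ID&lt;br /&gt;
&lt;br /&gt;
squeue -u $USER  # list this user's pending and running jobs&lt;br /&gt;
&lt;br /&gt;
sacct -j &amp;lt;jobid&amp;gt;  # accounting summary for the job once it has finished&lt;br /&gt;
&lt;br /&gt;
cat testoutput_&amp;lt;jobid&amp;gt;.out  # the file produced by the --output parameter&lt;br /&gt;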
&lt;br /&gt;
&lt;br /&gt;
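Note that conda create normally asks for confirmation, so inside a non-interactive batch job it may stop at the prompt, and recreating an environment that already exists may prompt or fail. A common alternative, shown only as a sketch (the environment name virtualenv matches the examples on this page), is to create the environment once from a login shell and let the job script only activate it:&lt;br /&gt;
&lt;br /&gt;
conda create -y -n virtualenv python=3.8  # run once, interactively, before submitting jobs&lt;br /&gt;
&lt;br /&gt;
The job script then keeps only the export PATH, source conda.sh and conda activate lines.&lt;br /&gt;
&lt;br /&gt;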
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for selecting the total GPU memory; each is obtained by combining the two directives below in the script. Each GPU on cuda1, cuda2 and cuda3 has 16 GB, while each GPU on cuda4 has 48 GB, so the total follows from the number of cards requested with --gres.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 16 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 32 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 48 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 96 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=103</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=103"/>
		<updated>2021-08-30T11:11:19Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks launched on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time of the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on specific nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example by executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
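&lt;br /&gt;
After submission, the job can be followed with the standard Slurm commands. A minimal sketch, assuming the script above is saved as test_job.sh (the file name and the job ID are placeholders):&lt;br /&gt;
&lt;br /&gt;
sbatch test_job.sh  # submit the script; sbatch prints the assigned job ID&lt;br /&gt;
&lt;br /&gt;
squeue -u $USER  # list this user's pending and running jobs&lt;br /&gt;
&lt;br /&gt;
sacct -j &amp;lt;jobid&amp;gt;  # accounting summary for the job once it has finished&lt;br /&gt;
&lt;br /&gt;
cat testoutput_&amp;lt;jobid&amp;gt;.out  # the file produced by the --output parameter&lt;br /&gt;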
&lt;br /&gt;
&lt;br /&gt;
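Note that conda create normally asks for confirmation, so inside a non-interactive batch job it may stop at the prompt, and recreating an environment that already exists may prompt or fail. A common alternative, shown only as a sketch (the environment name virtualenv matches the examples on this page), is to create the environment once from a login shell and let the job script only activate it:&lt;br /&gt;
&lt;br /&gt;
conda create -y -n virtualenv python=3.8  # run once, interactively, before submitting jobs&lt;br /&gt;
&lt;br /&gt;
The job script then keeps only the export PATH, source conda.sh and conda activate lines.&lt;br /&gt;
&lt;br /&gt;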
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for selecting the total GPU memory; each is obtained by combining the two directives below in the script. Each GPU on cuda1, cuda2 and cuda3 has 16 GB, while each GPU on cuda4 has 48 GB, so the total follows from the number of cards requested with --gres.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 16 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 32 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 48 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;&amp;lt;b&amp;gt;Example with 96 GB GPU:&amp;lt;/b&amp;gt;&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=102</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=102"/>
		<updated>2021-08-30T11:08:28Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks launched on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time of the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on specific nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example by executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
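&lt;br /&gt;
After submission, the job can be followed with the standard Slurm commands. A minimal sketch, assuming the script above is saved as test_job.sh (the file name and the job ID are placeholders):&lt;br /&gt;
&lt;br /&gt;
sbatch test_job.sh  # submit the script; sbatch prints the assigned job ID&lt;br /&gt;
&lt;br /&gt;
squeue -u $USER  # list this user's pending and running jobs&lt;br /&gt;
&lt;br /&gt;
sacct -j &amp;lt;jobid&amp;gt;  # accounting summary for the job once it has finished&lt;br /&gt;
&lt;br /&gt;
cat testoutput_&amp;lt;jobid&amp;gt;.out  # the file produced by the --output parameter&lt;br /&gt;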
&lt;br /&gt;
&lt;br /&gt;
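Note that conda create normally asks for confirmation, so inside a non-interactive batch job it may stop at the prompt, and recreating an environment that already exists may prompt or fail. A common alternative, shown only as a sketch (the environment name virtualenv matches the examples on this page), is to create the environment once from a login shell and let the job script only activate it:&lt;br /&gt;
&lt;br /&gt;
conda create -y -n virtualenv python=3.8  # run once, interactively, before submitting jobs&lt;br /&gt;
&lt;br /&gt;
The job script then keeps only the export PATH, source conda.sh and conda activate lines.&lt;br /&gt;
&lt;br /&gt;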
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for selecting the total GPU memory; each is obtained by combining the two directives below in the script. Each GPU on cuda1, cuda2 and cuda3 has 16 GB, while each GPU on cuda4 has 48 GB, so the total follows from the number of cards requested with --gres.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;Example with 16 GB GPU:&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;Example with 32 GB GPU:&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;Example with 48 GB GPU:&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;Example with 96 GB GPU:&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=101</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=101"/>
		<updated>2021-08-30T11:06:42Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example by executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks launched on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time of the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on specific nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example by executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
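&lt;br /&gt;
After submission, the job can be followed with the standard Slurm commands. A minimal sketch, assuming the script above is saved as test_job.sh (the file name and the job ID are placeholders):&lt;br /&gt;
&lt;br /&gt;
sbatch test_job.sh  # submit the script; sbatch prints the assigned job ID&lt;br /&gt;
&lt;br /&gt;
squeue -u $USER  # list this user's pending and running jobs&lt;br /&gt;
&lt;br /&gt;
sacct -j &amp;lt;jobid&amp;gt;  # accounting summary for the job once it has finished&lt;br /&gt;
&lt;br /&gt;
cat testoutput_&amp;lt;jobid&amp;gt;.out  # the file produced by the --output parameter&lt;br /&gt;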
&lt;br /&gt;
&lt;br /&gt;
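Note that conda create normally asks for confirmation, so inside a non-interactive batch job it may stop at the prompt, and recreating an environment that already exists may prompt or fail. A common alternative, shown only as a sketch (the environment name virtualenv matches the examples on this page), is to create the environment once from a login shell and let the job script only activate it:&lt;br /&gt;
&lt;br /&gt;
conda create -y -n virtualenv python=3.8  # run once, interactively, before submitting jobs&lt;br /&gt;
&lt;br /&gt;
The job script then keeps only the export PATH, source conda.sh and conda activate lines.&lt;br /&gt;
&lt;br /&gt;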
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for selecting the total GPU memory; each is obtained by combining the two directives below in the script. Each GPU on cuda1, cuda2 and cuda3 has 16 GB, while each GPU on cuda4 has 48 GB, so the total follows from the number of cards requested with --gres.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;Example with 16 GB GPU:&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;Example with 32 GB GPU:&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h3&amp;gt;Example with 48 GB GPU:&amp;lt;/h3&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example with 96 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=100</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=100"/>
		<updated>2021-08-30T11:06:13Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to run per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required per task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 restricts execution to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
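&lt;br /&gt;
For example, assuming the script above is saved as test_job.sh (the filename and job ID below are illustrative), a typical submit-and-monitor session looks like this:&lt;br /&gt;
&lt;br /&gt;
sbatch test_job.sh&lt;br /&gt;
&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&lt;br /&gt;
scancel 12345&lt;br /&gt;
&lt;br /&gt;
squeue lists the job while it is pending or running, and scancel removes it if it is no longer needed.&lt;br /&gt;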
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four GPU memory options; each one is selected by combining the two directives shown below in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
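&lt;br /&gt;
To confirm which cards were actually allocated, the following commands can be added to the script body (a minimal sketch; it assumes the NVIDIA driver utilities are installed on the CUDA nodes, and that Slurm exports CUDA_VISIBLE_DEVICES for jobs requesting --gres=gpu, which is its usual behaviour):&lt;br /&gt;
&lt;br /&gt;
echo $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&lt;br /&gt;
nvidia-smi&lt;br /&gt;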
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''Example with 16 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example with 32 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example with 48 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example with 96 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=99</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=99"/>
		<updated>2021-08-30T11:05:20Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to run per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required per task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 restricts execution to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
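&lt;br /&gt;
After a job finishes, its accounting record can be checked with sacct (the job ID is illustrative; this assumes job accounting is enabled on the cluster):&lt;br /&gt;
&lt;br /&gt;
sacct -j 12345 --format=JobID,JobName,State,Elapsed&lt;br /&gt;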
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four GPU memory options; each one is selected by combining the two directives shown below in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''Example with 16 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example with 32 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda1'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example with 48 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:1'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example with 96 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --gres=gpu:2'''&lt;br /&gt;
&lt;br /&gt;
'''#SBATCH --nodelist=cuda4'''&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=98</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=98"/>
		<updated>2021-08-30T11:03:32Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to run per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required per task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 restricts execution to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four GPU memory options; each one is selected by combining the two directives shown below in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''Example with 16 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example with 32 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example with 48 GB GPU:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example with 96 GB GPU:'''&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=97</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=97"/>
		<updated>2021-08-30T11:02:49Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to run per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required per task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 restricts execution to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four GPU memory options; each one is selected by combining the two directives shown below in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExamplesGPU&amp;quot;&amp;gt;Examples with GPU memory selection&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''Example with 16 GB GPU:'''&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks-per-node=2 &lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --job-name=test_job&lt;br /&gt;
#SBATCH --mem=1G&lt;br /&gt;
#SBATCH --error=testerror_%j.error&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --output=testoutput_%j.out&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
#SBATCH --nodelist=cuda1&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''Example with 32 GB GPU:'''&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks-per-node=2 &lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --job-name=test_job&lt;br /&gt;
#SBATCH --mem=1G&lt;br /&gt;
#SBATCH --error=testerror_%j.error&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --output=testoutput_%j.out&lt;br /&gt;
#SBATCH --gres=gpu:2&lt;br /&gt;
#SBATCH --nodelist=cuda1&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''Example with 48 GB GPU:'''&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks-per-node=2 &lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --job-name=test_job&lt;br /&gt;
#SBATCH --mem=1G&lt;br /&gt;
#SBATCH --error=testerror_%j.error&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --output=testoutput_%j.out&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
#SBATCH --nodelist=cuda4&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''Example with 96 GB GPU:'''&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --ntasks-per-node=2 &lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --job-name=test_job&lt;br /&gt;
#SBATCH --mem=1G&lt;br /&gt;
#SBATCH --error=testerror_%j.error&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --output=testoutput_%j.out&lt;br /&gt;
#SBATCH --gres=gpu:2&lt;br /&gt;
#SBATCH --nodelist=cuda4&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=96</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=96"/>
		<updated>2021-08-30T10:58:17Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
#[[SLURM#Slurm_GPUmemory|GPU memory selection options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to run per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required per task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 restricts execution to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four GPU memory options; each one is selected by combining the two directives shown below in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Technical_Specification&amp;diff=95</id>
		<title>Technical Specification</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Technical_Specification&amp;diff=95"/>
		<updated>2021-08-30T10:57:18Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Performance and installed versions&lt;br /&gt;
&lt;br /&gt;
There are 4 CUDA servers, accessible via the Slurm Workload Manager, with the following graphics cards installed:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|'''Host'''&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|'''GPU'''&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|'''OS'''&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|gpu.finki.ukim.mk&lt;br /&gt;
|/&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|cuda1&lt;br /&gt;
|#1: Quadro RTX 5000 16GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 5000 16GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|cuda2&lt;br /&gt;
|#1: Quadro RTX 5000 16GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 5000 16GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|cuda3&lt;br /&gt;
|#1: Quadro RTX 5000 16GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 5000 16GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|cuda4&lt;br /&gt;
|#1: Quadro RTX 8000 48GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 8000 48GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Technical_Specification&amp;diff=94</id>
		<title>Technical Specification</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Technical_Specification&amp;diff=94"/>
		<updated>2021-08-30T10:57:04Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Performance and installed versions&lt;br /&gt;
&lt;br /&gt;
There are 4 CUDA servers, accessible via the Slurm Workload Manager, with the following graphics cards installed:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|'''Host'''&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|'''GPU'''&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|'''OS'''&lt;br /&gt;
|-&lt;br /&gt;
!gpu.finki.ukim.mk&lt;br /&gt;
|/&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|cuda1&lt;br /&gt;
|#1: Quadro RTX 5000 16GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 5000 16GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|cuda2&lt;br /&gt;
|#1: Quadro RTX 5000 16GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 5000 16GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|cuda3&lt;br /&gt;
|#1: Quadro RTX 5000 16GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 5000 16GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|cuda4&lt;br /&gt;
|#1: Quadro RTX 8000 48GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 8000 48GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=93</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=93"/>
		<updated>2021-08-30T10:56:14Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to run per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required per task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 restricts execution to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four GPU memory options; each one is selected by combining the two directives shown below in the job script:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|GPU Memory&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=92</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=92"/>
		<updated>2021-08-30T10:54:51Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to run per node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File where errors from the job are written (%j is the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required per task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File where the standard output of the script is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 restricts execution to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
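&lt;br /&gt;
A minimal submit-and-check sketch (assuming the example above is saved as test_job.sh; the job ID 1234 is only illustrative and will differ on every run):&lt;br /&gt;
&lt;br /&gt;
sbatch test_job.sh # prints e.g. &amp;quot;Submitted batch job 1234&amp;quot;&lt;br /&gt;
&lt;br /&gt;
squeue -u $USER # list your jobs while they are queued or running&lt;br /&gt;
&lt;br /&gt;
scancel 1234 # cancel the job by its ID if needed&lt;br /&gt;
&lt;br /&gt;
cat testoutput_1234.out # output file created by the --output=testoutput_%j.out directive&lt;br /&gt;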
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for the amount of GPU memory a job can request, obtained by combining the --gres and --nodelist directives; the table below lists all four combinations.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!GPU Memory&lt;br /&gt;
!Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=91</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=91"/>
		<updated>2021-08-30T10:54:19Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for the amount of GPU memory a job can request, obtained by combining the --gres and --nodelist directives; the table below lists all four combinations.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!GPU Memory&lt;br /&gt;
!Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=Technical_Specification&amp;diff=90</id>
		<title>Technical Specification</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=Technical_Specification&amp;diff=90"/>
		<updated>2021-08-30T10:53:48Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Performance and installed versions&lt;br /&gt;
&lt;br /&gt;
There are 4 CUDA servers that can be accessed via the Slurm Workload Manager; their installed graphics cards are listed in the table below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
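The live state of these nodes can also be queried once logged in; a minimal sketch (output columns and node states depend on the cluster configuration):&lt;br /&gt;
&lt;br /&gt;
sinfo -N -l # one line per node with its state, CPU count and memory&lt;br /&gt;
&lt;br /&gt;
scontrol show node cuda4 # full details for a single node, including its gres/gpu entries&lt;br /&gt;
&lt;br /&gt;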
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
!'''Host'''&lt;br /&gt;
!'''GPU'''&lt;br /&gt;
!'''OS'''&lt;br /&gt;
|-&lt;br /&gt;
!gpu.finki.ukim.mk&lt;br /&gt;
|/&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!cuda1&lt;br /&gt;
|#1: Quadro RTX 5000 16GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 5000 16GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!cuda2&lt;br /&gt;
|#1: Quadro RTX 5000 16GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 5000 16GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!cuda3&lt;br /&gt;
|#1: Quadro RTX 5000 16GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 5000 16GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|-&lt;br /&gt;
!cuda4&lt;br /&gt;
|#1: Quadro RTX 8000 48GB&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;2: Quadro RTX 8000 48GB&lt;br /&gt;
|Ubuntu 18.04 / Slurm 17.11.2&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=89</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=89"/>
		<updated>2021-08-30T10:48:22Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for the amount of GPU memory a job can request, obtained by combining the --gres and --nodelist directives; the table below lists all four combinations.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto; background-color:#ffffff;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!GPU Memory&lt;br /&gt;
!Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=88</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=88"/>
		<updated>2021-08-30T10:45:08Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for the amount of GPU memory a job can request, obtained by combining the --gres and --nodelist directives; the table below lists all four combinations.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin-left: auto; margin-right: auto;&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!GPU Memory&lt;br /&gt;
!Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=87</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=87"/>
		<updated>2021-08-30T10:44:25Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for the amount of GPU memory a job can request, obtained by combining the --gres and --nodelist directives; the table below lists all four combinations.&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+&lt;br /&gt;
!GPU Memory&lt;br /&gt;
!Code for the script&lt;br /&gt;
|-&lt;br /&gt;
|16 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|32 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda1 (or cuda2 or cuda3)&lt;br /&gt;
|-&lt;br /&gt;
|48 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|-&lt;br /&gt;
|96 GB GDDR6&lt;br /&gt;
|#SBATCH --gres=gpu:2&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=86</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=86"/>
		<updated>2021-08-30T10:40:54Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for the amount of GPU memory a job can request, obtained by combining the --gres and --nodelist directives in the job script.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=85</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=85"/>
		<updated>2021-08-30T10:40:30Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;GPU memory selection options&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
There are four options for the amount of GPU memory a job can request, obtained by combining the --gres and --nodelist directives in the job script.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=84</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=84"/>
		<updated>2021-08-30T10:39:36Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;'''Example of executing a simple script'''&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h1 id=&amp;quot;Slurm_GPUmemory&amp;quot;&amp;gt;'''GPU memory selection options'''&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are four options for the amount of GPU memory a job can request, obtained by combining the --gres and --nodelist directives in the job script.&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=83</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=83"/>
		<updated>2021-08-30T10:37:12Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;'''Example of executing a simple script'''&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=82</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=82"/>
		<updated>2021-08-30T10:36:55Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;'''Example of executing a simple script'''&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExampleParameters&amp;quot;&amp;gt;Example parameters intended for GPU&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=81</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=81"/>
		<updated>2021-08-30T10:36:41Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;'''Example of executing a simple script'''&amp;lt;/h1&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExampleParameters&amp;quot;&amp;gt;Example parameters intended for GPU&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=80</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=80"/>
		<updated>2021-08-30T10:32:37Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2||# Number of tasks to launch on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Time limit for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which errors from the job are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores required for a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the standard output of the job is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards per node allocated for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4||# Run only on the listed nodes, e.g. cuda4 runs the job only on the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
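&lt;br /&gt;
Note that conda create may stop to ask for confirmation and will refuse to recreate an environment that already exists, so in practice the environment is usually created once in an interactive session and the batch script only runs conda activate; if a creation step must stay in the job script, conda create --yes avoids the prompt.&lt;br /&gt;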
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
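&lt;br /&gt;
Once the job is submitted, it can be followed with the standard Slurm client tools, for example (the job ID 123 is illustrative):&lt;br /&gt;
&lt;br /&gt;
squeue -u $USER	# List your pending and running jobs&lt;br /&gt;
&lt;br /&gt;
scontrol show job 123	# Show the full details of a job&lt;br /&gt;
&lt;br /&gt;
scancel 123	# Cancel a job&lt;br /&gt;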
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExampleParameters&amp;quot;&amp;gt;Example parameters intended for GPU&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks to run on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||	# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which job errors are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores allocated to a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the job's standard output is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards allocated per node for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4	||	# Run only on specific nodes, e.g. cuda4 restricts the job to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
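&lt;br /&gt;
A minimal sketch of a GPU job script that combines the parameters above (the job name, output file names, and the closing nvidia-smi check are illustrative assumptions, not fixed cluster settings):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=gpu_test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=gputesterror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=gputestoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
nvidia-smi&lt;br /&gt;
&lt;br /&gt;
The nvidia-smi call simply reports the GPU cards visible to the job, confirming that the requested --gres allocation was granted.&lt;/div&gt;</summary>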
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=79</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=79"/>
		<updated>2021-08-30T10:31:16Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks to run on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||	# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which job errors are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores allocated to a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the job's standard output is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards allocated per node for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4	||	# Run only on specific nodes, e.g. cuda4 restricts the job to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_ExampleParameters&amp;quot;&amp;gt;Example parameters intended for GPU&amp;lt;/h1&amp;gt;'''&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
	<entry>
		<id>https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=78</id>
		<title>SLURM</title>
		<link rel="alternate" type="text/html" href="https://wiki.hpc.mk/index.php?title=SLURM&amp;diff=78"/>
		<updated>2021-08-30T10:30:42Z</updated>

		<summary type="html">&lt;p&gt;Boris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Initiate and manage SLURM tasks ==&lt;br /&gt;
{|class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Contents&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
#[[SLURM#Slurm_Parameters|Most used parameters]]&lt;br /&gt;
#[[SLURM#Slurm_Example|Example of executing a simple script]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Parameters&amp;quot;&amp;gt;Most used parameters:&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;width: 100%; background-color:#ffffff; border-width: 0px&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!style=&amp;quot;text-align:left; background-color:#F1EDEC&amp;quot;|Parameters!!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=2 ||# Number of tasks to run on each node&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00||# Maximum run time for the job (days-hrs:min:sec)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job||	# Job name&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --mem=1G||# RAM allocated per node for the job (e.g. 1G, 2G, 4G)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error||# File to which job errors are written (%j expands to the job ID)&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --cpus-per-task=1||# Number of CPU cores allocated to a single task&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out||# File to which the job's standard output is written&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --gres=gpu:2||# Number of GPU cards allocated per node for the job&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --nodelist=cuda4	||	# Run only on specific nodes, e.g. cuda4 restricts the job to the cuda4 host&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;h1 id=&amp;quot;Slurm_Example&amp;quot;&amp;gt;Example of executing a simple script&amp;lt;/h1&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;!/bin/bash&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --job-name=test_job&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --error=testerror_%j.error&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;#&amp;lt;/nowiki&amp;gt;SBATCH --output=testoutput_%j.out&lt;br /&gt;
&lt;br /&gt;
export PATH=&amp;quot;/opt/anaconda3/bin:$PATH&amp;quot;&lt;br /&gt;
&lt;br /&gt;
source /opt/anaconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
conda create -n virtualenv python=3.8&lt;br /&gt;
&lt;br /&gt;
conda activate virtualenv&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;FINKI FCC&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The script is executed via sbatch &amp;lt;scriptname&amp;gt;.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example parameters intended for GPU&lt;/div&gt;</summary>
		<author><name>Boris</name></author>
	</entry>
</feed>