Accessing the Cluster¶
The Anselm cluster is accessed by SSH protocol via login nodes login1 and login2 at the address anselm.it4i.cz. The login nodes may be addressed specifically, by prepending the login node name to the address.
| Login address  | Port | Protocol | Login node                                   |
| -------------- | ---- | -------- | -------------------------------------------- |
| anselm.it4i.cz | 22   | ssh      | round-robin DNS record for login1 and login2 |
Authentication is by private key
Please verify SSH fingerprints during the first logon. They are identical on all login nodes:
md5:

29:b3:f4:64:b0:73:f5:6f:a7:85:0f:e0:0d:be:76:bf (DSA)
d4:6f:5c:18:f4:3f:70:ef:bc:fc:cc:2b:fd:13:36:b7 (RSA)

sha256:

LX2034TYy6Lf0Q7Zf3zOIZuFlG09DaSGROGBz6LBUy4 (DSA)
+DcED3GDoA9piuyvQOho+ltNvwB9SJSYXbB639hbejY (RSA)
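Fingerprints in both of the formats above can be printed with ssh-keygen. The sketch below generates a throwaway key purely to demonstrate the two output formats (the `-E` option requires OpenSSH 6.8 or newer); on a real first login, compare against the fingerprint that ssh itself prints for the server's host key.

```shell
# Generate a scratch key only to demonstrate fingerprint output formats:
tmp=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -q -f "$tmp/key"

# SHA256 fingerprint, base64-encoded (default in recent OpenSSH):
ssh-keygen -lf "$tmp/key.pub"

# MD5 fingerprint, colon-separated hex:
ssh-keygen -E md5 -lf "$tmp/key.pub"
```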
Private key authentication:
On Linux or Mac, use:
$ ssh -i /path/to/id_rsa email@example.com
If you see a warning message "UNPROTECTED PRIVATE KEY FILE!", use this command to set lower permissions to the private key file:
$ chmod 600 /path/to/id_rsa
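Mode 600 means the key is readable and writable by its owner only, which is what ssh requires. This can be verified as sketched below, on a scratch file standing in for your real key (GNU stat shown; on a Mac the equivalent format flag differs):

```shell
# Create a scratch file standing in for the private key:
demo_key=$(mktemp)
chmod 600 "$demo_key"

# Print the octal permission bits; 600 = owner read/write only:
stat -c '%a' "$demo_key"
```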
On Windows, use the PuTTY ssh client.
After logging in, you will see the command prompt:

                                                _
                           /\                  | |
                          /  \   _ __  ___  ___| |_ __ ___
                         / /\ \ | '_ \/ __|/ _ \ | '_ ` _ \
                        / ____ \| | | \__ \  __/ | | | | | |
                       /_/    \_\_| |_|___/\___|_|_| |_| |_|

                            http://www.it4i.cz/?lang=en

    Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
    [firstname.lastname@example.org ~]$
The environment is not shared between login nodes, except for shared filesystems.
Data in and out of the system may be transferred by the scp and sftp protocols. When large volumes of data are transferred, use the dedicated data mover node dm1.anselm.it4i.cz for increased performance (not available yet).
Authentication is by private key
Data transfer rates of up to 160MB/s can be achieved with scp or sftp.
1 TB may be transferred in about 1 hour 50 minutes.
To achieve 160 MB/s transfer rates, the end user must be connected by a 10G line all the way to IT4Innovations and use a computer with a fast processor for the transfer. When using a Gigabit Ethernet connection, transfer rates of up to 110 MB/s may be expected. A fast cipher (aes128-ctr) should be used.
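The 1 hour 50 minutes figure follows directly from the 160 MB/s rate; a quick shell sanity check:

```shell
# 1 TB at 160 MB/s (using 1 TB = 1024 * 1024 MB):
size_mb=$((1024 * 1024))
secs=$((size_mb / 160))
printf '%dh %dm\n' $((secs / 3600)) $((secs % 3600 / 60))    # 1h 49m
```

To select the fast cipher explicitly, pass `-c aes128-ctr` to ssh or scp.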
If you experience degraded data transfer performance, consult your local network provider.
On Linux or Mac, use an scp or sftp client to transfer data to Anselm:
$ scp -i /path/to/id_rsa my-local-file email@example.com:directory/file
$ scp -i /path/to/id_rsa -r my-local-dir firstname.lastname@example.org:directory
$ sftp -o IdentityFile=/path/to/id_rsa email@example.com
A very convenient way to transfer files in and out of Anselm is via the FUSE filesystem sshfs:
$ sshfs -o IdentityFile=/path/to/id_rsa firstname.lastname@example.org:. mountpoint
Using sshfs, the user's Anselm home directory will be mounted on your local computer, just like an external disk.
Learn more about ssh, scp, and sshfs by reading the manpages:
$ man ssh
$ man scp
$ man sshfs
More information about the shared file systems is available here.
Outgoing connections, from Anselm Cluster login nodes to the outside world, are restricted to the following ports:
Please use ssh port forwarding and proxy servers to connect from Anselm to all other remote ports.
Outgoing connections from Anselm Cluster compute nodes are restricted to the internal network. Direct connections from compute nodes to the outside world are cut.
Port Forwarding From Login Nodes¶
Port forwarding allows an application running on Anselm to connect to arbitrary remote hosts and ports.
It works by tunneling the connection from Anselm back to users' workstations and forwarding from the workstation to the remote host.
Pick some unused port on the Anselm login node (for example 6000) and establish the port forwarding:
$ ssh -R 6000:remote.host.com:1234 anselm.it4i.cz
In this example, we establish port forwarding between port 6000 on Anselm and port 1234 on the remote.host.com. By accessing localhost:6000 on Anselm, an application will see the response of remote.host.com:1234. The traffic will run via the user's local workstation.
Port forwarding may be done using PuTTY as well. On the PuTTY Configuration screen, load your Anselm configuration first. Then go to Connection->SSH->Tunnels to set up the port forwarding. Click the Remote radio button. Insert 6000 into the Source port textbox. Insert remote.host.com:1234 into the Destination textbox. Click the Add button, then Open.
Port forwarding may be established directly to the remote host. However, this requires that the user has ssh access to remote.host.com:
$ ssh -L 6000:localhost:1234 remote.host.com
Port number 6000 is chosen as an example only. Pick any free port.
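One way to find a free port on the login node is to ask the kernel for one by binding to port 0 (the python3 one-liner below is an illustration and assumes python3 is available; any method of finding an unused port will do):

```shell
# Bind to port 0; the kernel assigns an unused ephemeral port:
port=$(python3 -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
echo "$port"
```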
Port Forwarding From Compute Nodes¶
Remote port forwarding from compute nodes allows applications running on the compute nodes to access hosts outside the Anselm Cluster.
First, establish the remote port forwarding from the login node, as described above.
Second, invoke port forwarding from the compute node to the login node. Insert the following line into your jobscript or interactive shell:
$ ssh -TN -f -L 6000:localhost:6000 login1
In this example, we assume that port forwarding from login1:6000 to remote.host.com:1234 has been established beforehand. By accessing localhost:6000, an application running on a compute node will see the response of remote.host.com:1234.
Using Proxy Servers¶
Port forwarding is static, each single port is mapped to a particular port on a remote host. Connection to another remote host requires a new forward.
Applications with built-in proxy support get unrestricted access to remote hosts via a single proxy server.
To establish a local proxy server on your workstation, install and run SOCKS proxy server software. On Linux, the sshd daemon provides this functionality. To establish a SOCKS proxy server listening on port 1080, run:
$ ssh -D 1080 localhost
On Windows, install and run the free, open source Sock Puppet server.
Once the proxy server is running, establish ssh port forwarding from Anselm to the proxy server, port 1080, exactly as described above:
$ ssh -R 6000:localhost:1080 anselm.it4i.cz
Now, configure your application's proxy settings to localhost:6000. Use port forwarding to access the proxy server from compute nodes as well.
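How the proxy is configured is application-specific; many command-line tools honor the conventional *_proxy environment variables, and curl also accepts a per-command flag. A minimal sketch, assuming the forwarded proxy from above is listening on localhost:6000:

```shell
# Point proxy-aware tools at the forwarded SOCKS proxy:
export all_proxy=socks5://localhost:6000
echo "$all_proxy"

# Per-command alternative for curl (shown as a comment, no network needed):
#   curl --socks5 localhost:6000 http://remote.host.com/
```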
Graphical User Interface¶
- The X Window system is the principal way to get GUI access to the clusters.
- Virtual Network Computing is a graphical desktop sharing system that uses the Remote Frame Buffer protocol to remotely control another computer.
- Access IT4Innovations internal resources via VPN.