Pytorch is an open-source machine learning library for Python, based on Torch. It provides GPU-accelerated tensor computation and deep neural networks built on an autograd system, with a focus on flexibility and fast experimentation.
- Homepage: https://pytorch.org/
Pytorch is open to all HPRC users.
Anaconda and Pytorch Packages
TAMU HPRC currently supports the use of Pytorch through the Anaconda modules. There are a variety of Anaconda modules available on Ada and Terra.
While several versions of Anaconda have Pytorch environments installed, it is simplest to use exactly the versions listed in the following sections.
You can learn more about the module system on our SW:Modules page.
You can explore the available Anaconda environments on a per-module basis using the following:
[NetID@ada ~]$ module load Anaconda/[SomeVersion]
[NetID@ada ~]$ conda info --envs
Pytorch on Ada (CPU-only)
A single version of Pytorch is currently available on Ada. This version is limited to the CPU only (no GPU support).
To load this version (python 3.6):
[NetID@ada ~]$ module load Anaconda/3-184.108.40.206
[NetID@ada ~]$ source activate pytorch-0.2.0
[NetID@ada ~]$ [run your Python program accessing Pytorch]
[NetID@ada ~]$ source deactivate
This version can be run on any of the 64GB or 256GB compute nodes.
Pytorch on Terra (GPU-only)
One version of Pytorch (0.1.12) with GPU support is available on Terra, under the module Anaconda/3-220.127.116.11. Programs that use GPUs must be run on the GPU nodes.
To load pytorch-gpu-0.1.12 (python 3.6.2):
[NetID@terra ~]$ module load Anaconda/3-18.104.22.168
[NetID@terra ~]$ source activate pytorch-gpu-0.1.12
[NetID@terra ~]$ [run your Python program accessing Pytorch]
[NetID@terra ~]$ source deactivate
Example Pytorch Script
As with any job on the system, Pytorch should be used via the submission of a job file. Because Pytorch scripts are written in Python, they should not be written directly inside a job file or entered in the shell line by line. Instead, create a separate file for the Python/Pytorch script, which the job file can then execute.
To create a new script file, simply open up the text editor of your choice.
Below is an example script (entered in the text editor of your choice) from http://pytorch.org/tutorials/beginner/pytorch_with_examples.html:
import torch

dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in).type(dtype)
y = torch.randn(N, D_out).type(dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
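The example above computes its gradients by hand rather than with autograd, so it can be worth verifying the math. The sketch below is a standalone illustration (not part of the HPRC example; the 1-D scalar model and all names are ours): it checks the analytic gradient for w2, which mirrors the `grad_w2` line above, against a central finite difference.

```python
# Finite-difference check of the manual gradient, on a tiny 1-D version
# of the same model: y_pred = relu(x * w1) * w2, loss = (y_pred - y)^2.
def loss(x, y, w1, w2):
    h_relu = max(x * w1, 0.0)  # scalar "hidden layer" with ReLU
    return (h_relu * w2 - y) ** 2

x, y, w1, w2 = 0.5, 2.0, 1.2, 0.8

# Analytic gradient w.r.t. w2 (mirrors grad_w2 = h_relu.t().mm(grad_y_pred))
h_relu = max(x * w1, 0.0)
y_pred = h_relu * w2
grad_w2 = 2.0 * (y_pred - y) * h_relu

# Central finite difference for comparison
eps = 1e-6
num = (loss(x, y, w1, w2 + eps) - loss(x, y, w1, w2 - eps)) / (2 * eps)

print(abs(grad_w2 - num) < 1e-6)  # prints True: the gradients agree
```

The same check can be repeated for w1; if a manual gradient and the finite difference disagree, the backprop step has a bug.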
It is recommended, though not required, to save the script with a .py file extension.
Once saved, the script can be tested on a login node by entering:
[NetID@terra ~]$ python testscript.py
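Once the script runs correctly, it can be submitted through the batch system. Below is a minimal sketch of a Terra (Slurm) job file; the resource values (job name, wall time, memory, GPU request) are placeholders to adapt for your job, and you should consult HPRC's Terra batch documentation for site-specific requirements.

```shell
#!/bin/bash
# Illustrative job file sketch -- all #SBATCH values below are placeholders
#SBATCH --job-name=pytorch_test      # job name
#SBATCH --time=01:00:00              # wall-clock time limit
#SBATCH --ntasks=1                   # single task
#SBATCH --mem=8G                     # memory per node
#SBATCH --gres=gpu:1                 # request one GPU (Terra GPU nodes)
#SBATCH --output=pytorch_test.%j     # stdout/stderr file, %j = job ID

module load Anaconda/3-18.104.22.168
source activate pytorch-gpu-0.1.12
python testscript.py
source deactivate
```

The job file is then submitted with `sbatch` rather than run directly on a login node.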