LSF:Batch Job Submission
Once your job file is ready, submit it to LSF with the following command:
[ NetID@ ~]$ bsub < MyJob.LSF
Verifying job submission parameters...
Verifying project account...
     Account to charge:       123456789123
     Balance (SUs):           5000.0000
     SUs to charge:           5.0000
Job <12345> is submitted to default queue <sn_regular>.
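For reference, a minimal job file might look like the following sketch. The directives shown (job name, core count, wall-clock limit, output files) are illustrative assumptions, not site defaults; check the cluster's job file documentation for the required options:

```shell
#!/bin/bash
# Hypothetical minimal LSF job file (MyJob.LSF); directive values are examples only.
#BSUB -J MyJob            # job name
#BSUB -n 1                # number of cores
#BSUB -W 0:30             # wall-clock limit (hh:mm)
#BSUB -o MyJob.%J.out     # stdout file (%J expands to the job ID)
#BSUB -e MyJob.%J.err     # stderr file

# Commands to run; replace with your actual workload.
echo "Job running on $(hostname)"
```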
After a job has been submitted, you may want to check on its progress or cancel it. Below is a list of the most commonly used job monitoring and control commands on Ada and Curie.
|Function|Command|Example|
|---|---|---|
|Submit a job|bsub < [script_file]|bsub < MyJob.LSF|
|Cancel/kill a job|bkill [job_id]|bkill 101204|
|Check summary status of a single job|bjobs [job_id]|bjobs 101204|
|Check summary status of all jobs for a user|bjobs -u [user_name]|bjobs -u adaUser1|
|Check detailed status of a single job|bjobs -l [job_id]|bjobs -l 101204|
|Modify job submission options|bmod [bsub_options] [job_id]|bmod -W 2:00 101204|
For more information on any of the commands above, please see their respective man pages.
[ NetID@ ~]$ man [command]
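As a sketch of how these commands can be combined in a script, the following shell function polls bjobs until a job leaves the pending/running states. The state names and output parsing are assumptions based on standard LSF bjobs output (a STAT column with values such as PEND, RUN, DONE, EXIT), so adjust them for your site:

```shell
# Sketch: block until an LSF job is no longer pending or running.
# Assumes standard bjobs output with a STAT column (PEND, RUN, DONE, EXIT, ...).
wait_for_job() {
  local jobid=$1
  local interval=${2:-30}   # seconds between polls
  while bjobs "$jobid" 2>/dev/null | grep -qE 'PEND|RUN'; do
    sleep "$interval"
  done
}

# Usage (job ID taken from a previous bsub):
# wait_for_job 101204
```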
tamulauncher provides a convenient way to run a large number of serial or multithreaded commands without submitting individual jobs or a job array. The user provides a text file containing all commands to be executed, and tamulauncher executes them concurrently. The number of commands executed concurrently depends on the resources requested in the batch job; when tamulauncher is run interactively, it is limited to at most 8 concurrent commands. tamulauncher is available on terra, ada, and curie, and no module needs to be loaded before using it. It has been successfully tested with over 100,000 commands.
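For illustration, the commands file can be generated with a short shell loop. The program name (process_sample) and the input/output naming scheme below are hypothetical placeholders for your own workload:

```shell
# Build a commands file for tamulauncher, one command per line.
# "process_sample" and the file naming scheme are placeholders.
for i in $(seq 1 100); do
  echo "./process_sample input_${i}.dat > output_${i}.log"
done > commands.txt

# Inside a batch job script you would then run:
# tamulauncher commands.txt
```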
tamulauncher is preferred over job arrays for submitting a large number of individual tasks, especially when the commands' run times are relatively short. It makes better use of the nodes, puts less load on the batch scheduler, and reduces interference with other users' jobs on the same node.
For more information, see the tamulauncher documentation page.