Batch schedulers – A comparative review
by
Shawn M. Gordon
President
S.M.Gordon & Associates
Introduction
Recently I was fortunate(?) enough to get to review a plethora of batch schedulers for installation at my site. I decided to limit my choices to five different schedulers so I would be done by the end of 1992. I will try to cover all the schedulers in each section and give what I considered to be the strengths and weaknesses of each.
All work was done on a single HP 3000 Series 70 running MPE release 21 of Platform 1P. The system has 12 meg of main memory and 3 gig of disk space. There are 50 users at any given time on the system, but they are usually only on between 7am and 7pm; the rest of the night is given to backups and batch jobs. We only do a minimum of batch processing during the day, but users can stream some jobs on their own. On an average night we will stream about 30 jobs, but during month end or accounting period ends it can be closer to 250. I don't have a network, but I will note which products support them; I cannot, however, test their capability in this respect.
The products reviewed were Maestro from Unison, Express from OCS, JMS/3000 from Design/3000, Masterop from Kemp software and Jobtime/3000 from NSD. I didn’t get Emperor from Carolian because they charged for the demo and as a rule I won’t pay for a demo. I was very interested to see how the biggies in the data center solution game (UNISON, OCS) fared against each other and how a couple of relatively unknown products would do against the giants.
There are certain things that were common to all the products, so it doesn't make sense to talk about them product by product. The vendors all implemented these things differently, so where it is pertinent I will show examples of syntax so you can judge what is easiest.
All the schedulers supported job dependencies (job B is streamed after job A, but not until job A completes; you could then go on to specify that it had to finish successfully if you chose), specification of dates and times for scheduling a job, and job queue management. And they all need to have at least one background job executing to handle the whole shebang.
Features
OCS Express: Express has kind of an odd mixture of command driven and screen driven interface; everyone else was either one or the other. Typically you are in the Express program at a > prompt and you enter a command; this command will either run a program with a full screen interface that prompts for information, or it is a command to the Express program itself.
Express does support network dependencies for job schedules, which is a new feature. They also include a STDLIST analysis facility, another new feature. This allows you to specify a string of text that is to be scanned for, as well as various dispositions should one of your exceptions occur, including altering spool priority, deleting the spoolfile, and determining whether it is appropriate to stream the job that is dependent on this one, depending on how it was defined.
The way STDLIST analysis is implemented is rather interesting: you actually define error logic checking blocks and then assign these blocks to your different jobs. This is rather handy since you will probably have a limited number of conditions that you want to check for, and you can apply multiple criteria to a single job if you wish to do more extensive checking. It will read all spoolfiles that are in either a READY or LOCKED state, so you need to have all your spoolfile output at a low priority so it won't print automatically.
OCS refers to dependencies as 'prerequisites', but for consistency I am going to stick with dependency. A dependency can be a time of day that must pass before the job is scheduled, a job or jobs that must complete before the job is scheduled, a job schedule that must complete, the availability of a resource such as a tape drive or exclusive access to a data base, or the priority of input jobs, so jobs with a higher priority will go first if no other dependency takes precedence. Finally you have what OCS calls 'runbook' dependencies. I want to talk about the runbook concept for a minute.
The runbook is very badly documented and it took me a little while to figure out how it worked. Once I understood it I realized that it was too easy, which was why I got confused. Basically a runbook is a user defined number like RB1000, a message or title to associate with it, and an action. The action can be requiring an operator response before letting the job go, or just a notification to the operator. You can specify that a response to the runbook satisfies the dependency for this job only, not for any other job dependencies. This may seem kind of basic at first, but it gives you a way to define a block within your schedule that will require human intervention, and sometimes that can be very helpful.
You can then display the runbooks and reply to them where applicable. A typical display and response would look like this:
>SHOW R
REQ  STATUS   RUNBOOK  TITLE                           JOBNAME
---  ------   -------  -----                           -------
  1  WAITING  RB1000   Are all users off the system?   BACKUP.JCL.EXPRESS
  2  WAITING  RB1010   Ready to run TAPEJOB?           TAPEJOB.JCL.EXPRESS
>RESPOND 1,Y
This would clear RB1000 and allow the job to be streamed by the Dispatcher.
I found nine different commands for altering the schedule. In alphabetical order they are listed below; a short sketch of a couple of them in use follows the list.
Activate: This allows you to resume a job that has been suspended, rather like the MPE command RESUMEJOB. You can use an OCS job reference number, a schedule ID, the name of a job, or @ for all suspended jobs.
Cancel: This will remove a job from the schedule, the scheduler will treat it as a successful completion. If you specify ‘pending’ it will wait till any job dependencies are satisfied for this job before performing the cancel.
Change: Lets you modify any of the schedule parameters for a scheduled job.
Delete: Similar to Cancel, but it removes the production control file for the job from the system.
Override: This will override all the dependencies for a job so that it will launch without them.
Respond: This is the command I just showed under runbooks.
Restream: This is the job recovery command. With this you can restart a job that terminated with an irregular status. You can specify a different disk file and start point as well.
Revoke: This is similar to cancel but it pertains to a user-scheduled job. You can only revoke jobs that are still in the SCHED state.
Suspend: This is the opposite of Activate, it allows you to suspend a job, rather like the MPE command BREAKJOB.
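To give a feel for how these look in practice, here is a short hypothetical sequence at the Express > prompt. The command names are the ones listed above, but the exact parameter forms are my own approximation rather than verified syntax.

>SUSPEND DAILYRPT.JCL.EXPRESS    - hold the report job while a problem is investigated
>RESTREAM WORKDAY1.JCL.EXPRESS   - recover the job that terminated with an irregular status
>ACTIVATE @                      - resume everything that was suspended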
Express supports parameter substitution and although I didn’t care for their syntax they did have some absolutely wonderful date manipulation parameters.
For example if you wanted to have Express substitute a lockword for you in a job stream you would say something like:
!RUN PAYR/&1.PUB.FIN
The &1 tells Express to substitute the file's lockword. For a remote hello password you would use &7 in place of the password. All parameter substitution must be prefaced by the ampersand (&) symbol; this is what tells Express that the following text is going to have to be evaluated. You could prompt for a value and assign it to a variable with the following:
&(ASK OUTDEV = "ENTER LIST DEVICE FOR ", $JSNAME)
The $JSNAME is a system defined variable that is the sign on name of the job; in this case the user response would be stored in the variable OUTDEV, which could then be referenced in a file equation or something later. The syntax for referencing the variable is a little odd, however; consider this:
&(PUT OUTDEV="LP")
COBOL MYSRC, MYUSL, &(PUT OUTDEV)
The first line initializes the variable OUTDEV to the value LP; the second line resolves the variable OUTDEV and returns it to the command. I would like to see OCS modify this syntax to be more consistent with the MPE/XL ability to set and resolve variables.
Ok, here is the best part of the parameter substitution: dates. The DATE parameter is of the form &(DATE "formula"), where the formula is translated into a date using the system calendar. Here are some examples of valid date syntax:
&(DATE "TODAY + 6 DAYS AS MMDDYY")       - Today plus 6 days in MMDDYY format
&(DATE "CMONTH + 0 FRIDAYS AS MMDDYY")   - The 1st Friday of the month
&(DATE "TODAY + 2 WKS AS JJJ")           - 2 weeks from today as a Julian day
These are just a few examples; you can mix all sorts of calculations to determine a date offset using Julian, current, and fiscal calendar information. This part is by far the best piece of the software.
Ok, I suppose I should touch on how jobs are submitted and schedules are created. There are several ways to get a job submitted and/or added to a schedule. The way the evaluation guide takes you through it, you would first submit some jobs to Express using the following syntax:
>SUBMIT WORKDAY1.JCL.EXPRESS;SCHEDID=WORKDY
        ;DEPENDS=BACKUP.JCL.EXPRESS;LOADDB;SCHED
>SUBMIT DAILYRPT.JCL.EXPRESS;SCHEDID=WORKDY
        ;DEPENDS=WORKDAY1.JCL.EXPRESS,TI1600;LOADDB;SCHED
This would set up WORKDAY1 to stream when BACKUP.JCL completed, assign the job to the schedule WORKDY and load the information into the data base. The second job is dependent on the first job as well as having a time dependency of 4:00 pm (TI1600). After you get some stuff set up you will need to compile the schedule with the COMPILE command. This command has various parameters that you can specify, one of the more important being the date that you want to compile; this lets you set up schedules for different days. Now to get everything going you would issue the START command from within Express.
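Putting the pieces together, a complete session might look something like the sketch below. The SUBMIT lines repeat the example above; the COMPILE and START lines are only my approximation of the flow the evaluation guide describes, so treat the exact parameter syntax as illustrative.

>SUBMIT WORKDAY1.JCL.EXPRESS;SCHEDID=WORKDY
        ;DEPENDS=BACKUP.JCL.EXPRESS;LOADDB;SCHED
>COMPILE ...       - compile the schedule, supplying the date you want to compile for
>START             - get everything going, as described above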
If you don't know exactly what command syntax you want, you can just issue the SUBMIT command with no parameters and Express will step you through all the options. This is a nice touch since all these keywords can start to become overwhelming after a while.
I found this process just a little bit confusing; I think I would rather be able to create a file that clearly laid out the schedules and the dependencies. You might feel differently, however.
Now you may be asking yourself how to control those user launched jobs that don't go through the scheduler. You would do this through their queue management facility, which allows you to set pseudo job limits for jobs that qualify for a queue. So if your MPE job limit is 10, there are 3 jobs running in the USER queue, the USER queue has a limit of 3, and a fourth job gets streamed that matches the job mask for the queue USER, it will wait until at least one of the jobs in that queue logs off. I personally find this kind of thing very useful.
An extensive selection of reports is also available for you to get offline listings of just about anything you could want from the Express system. Their product also includes a module to do spoolfile analysis; this can be a pretty handy function as well, as long as you use it.
Unison Maestro: They took a rather musical approach to batch management including such programs as ARRANGER, CHORUS, COMPOSER, STAGEMAN, and data bases named MOZART and STRAUS. As a matter of fact there are no less than 27 program files that are distributed for use with Maestro and strangely enough, not one of them is called Maestro.
It’s a good thing that Maestro has such a good demo guide because at first glance there is a daunting number of programs that you must step through to set things up. I will cover the steps that I went through to set up my first schedule using their demo guide. Maestro has extensive support for networks and inter-cpu dependencies, but here again, since I have no network I couldn’t test it.
The first program is Arranger, and like its name it is used to arrange or set up the Maestro environment. This allows you to set up your CPU ID and allocate resources, to name but two functions. A Maestro resource consists of a resource name, the number of these resources that are available, and a description. Later when you create a schedule you can allocate these resources to a job; this allows you to enforce certain types of dependencies or single threading of certain types of jobs. I rather liked the concept of the resource, it let me control exactly how many of something could run at a time. It was similar to, but more extensive than, the OCS 'runbook' concept.
This is also the program where you define your calendars, prompts, report distribution, CPU links, etc. Basically it is your entire config program. The program operates in block mode using V-Plus screens and has what I feel is a less than intuitive menu system. The demo guide instructed me to enter menu options that I didn't even see on the menu; they worked though. Ok, so after I defined my system and created a resource in Arranger it was off to the next program. As you can see, Arranger would be used a lot when first setting up Maestro, but later on you would be less prone to need it.
My next step was to compose a schedule using Composer. This program allows you to create, modify, delete and otherwise manipulate schedules. To create a new schedule you enter the command NEW from the program prompt and EDIT/3000 is created as a child process. From here you just use the editor commands to add your schedule, typically one at a time. A schedule consists of a matched SCHEDULE..END block with all the job parameters and dependencies defined within it; here is an example:
SCHEDULE GLDAILY ON WEEKDAYS:
MDEMOJ1 JOBFILENAME MDEMOJ1.MDEMO.CCC
        NEEDS GLDB
        PROMPT "Has AP shut down?"
MDEMOJ2 ISUSERJOB
        STREAMLOGON MAESTRO.CCC,MAESTRO
        NEEDS GLDB
MDEMOJ3 JOBFILENAME MDEMOJ3.MDEMO.CCC
        FOLLOWS MDEMOJ1, MDEMOJ2
END
As you can see we have defined a schedule called GLDAILY that is to be run Monday through Friday (WEEKDAYS). In this example MDEMOJ1-3 are job streams; everything between one job stream name and the next is a parameter assigned to the preceding job name. Our first job, MDEMOJ1, has two requirements before it will log on: first, the resource that I created earlier with an allocation of one, called GLDB, must be available; second, it is waiting until someone responds to the prompt "Has AP shut down?". Once those are satisfied it will stream the job identified by JOBFILENAME. The second job, MDEMOJ2, is a user streamed job and also requires the resource GLDB; since MDEMOJ1 has this resource and only one is available, MDEMOJ2 won't stream until J1 logs off. Since it is a user streamed job we have to tell Maestro how to identify the log on of the job, and this is done with the STREAMLOGON parameter. Our third job is MDEMOJ3 and it will not log on until both J1 and J2 have finished.
This is a real basic version of how you would set up a schedule; there are all sorts of expressions you can evaluate and parameters you can set for exclusions as well. The free form syntax gives you a great degree of flexibility when defining your schedules.
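Purely as an illustration of the flavor (the keywords beyond those shown above are my own guesses, not taken from the Unison manual), a launch time or a holiday exclusion might be layered onto the same schedule along these lines:

SCHEDULE GLDAILY ON WEEKDAYS EXCEPT HOLIDAYS:
MDEMOJ1 JOBFILENAME MDEMOJ1.MDEMO.CCC
        AT 1800
        NEEDS GLDB
END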
So anyway, once you exit the editor in Composer, it will compile your schedule; this is basically just a syntax check of what you typed. If there is a problem you will be informed of what it is and asked if you want to re-enter the editor so you can fix it. At this point you will run the Scheduler program and supply it with the schedules you want to process. A pick list would have been nice here so you wouldn't have to keep writing things down.
Now you run the actual compiler program, which is called, strangely enough, 'Compiler'. This will, as Unison puts it, build the interim production control file.
Believe it or not we are still not quite ready to start up a schedule. The manual recommends at this point to generate the pre-production reports so you can review your set up. You have the ability to generate both pre and post production reports from the Reporter program. The reports are actually quite good and easy to understand, they don’t seem to load you up with a lot of unnecessary information but get right to the point.
Ok, now we can take our interim production control file and build the new production control file using the program Stageman. All this program does is build the file from the input file; it has no dialogue to deal with.
Guess what, we are now ready to run the core program of the Maestro system, and that is Conman. This is where you basically control the system, start and stop the scheduler and so on. I started up the background job multiple times at once just to see what would happen, and it started up all the ones I started, but the extra ones aborted themselves after a minute. They didn't abort because they took into account that there should only be one job, but because the files they needed had been opened exclusively.
The Conman program will let you change the number of resources allocated for a previously defined resource, you cannot create a new one however, you must use Arranger for that. It is within this program that you will start and stop your schedules, reply to requests, display the status of schedules and dependencies that currently exist. If you are working in a network you would link and unlink CPUs from here as well. You can delete dependencies, alter Maestro job priorities and everything else you can think of associated with the management of schedules.
I want to cover parameter substitution a little bit at this point. The one thing almost everyone wants to use is date expressions. The best way to illustrate is with a few examples:
?($TODAY > 2 DAYS PIC MM/DD/YY)
    Today plus 2 days, returned as MM/DD/YY
?($TODAY > NEAREST MONTHEND < NEAREST WE PIC MM/DD/YY)
    Computes the date of the nearest Wednesday before the nearest month-end date that
    follows the current MPE system date
?($?(START) > 1 MONTH PIC YYMMDD)
    The date that is one month greater than the date stored in the variable START
A nice addition is that you can run the Maestro stream program MSTREAM and just feed it these date expressions so you can test them and make sure their output is what you expect. My biggest complaint with the date expressions is that > means plus and < means minus when an expression is calculated; I would have preferred the use of + and -. The language is rather English-like otherwise. The second form of parameter substitution is where you define a variable and a value inside of Arranger and then reference them in a job, as in the following example:
The job file:                      Is streamed as:
!JOB GL2,JOHN.CORP,GL              !JOB GL2,JOHN.CORP,GL
!FILE REPTOPS;?(PRINTER),10,2      !FILE REPTOPS;DEV=11,10,2
!RUN GLOPS                         !RUN GLOPS
!EOJ                               !EOJ
What I didn’t show is that PRINTER was defined in Arranger to have a value of ‘DEV=11’. This gives you the ability to globally define variables. These can be anything you want them to be. You must remember that parameter substitution only works for jobs that are streamed by Maestro because it has to substitute the parameters at stream time.
The last way to define parameters is with the PARMS program, which allows you to define parameters on the fly. As in our previous example of the parameter PRINTER, we would define it with the PARMS program in the following manner:
:RUN PARMS.MAESTRO.CCC
Name of parameter ? PRINTER
Value of parameter? DEV=11
Name of parameter ?
END OF PROGRAM
I liked how easily Maestro grabbed information from my user streamed jobs, it made it feel as though I had set them all up and run them through Maestro.
The day that I was doing my final testing I had about 30 jobs left over that were still running from my weekend, this gave me a good idea of how Maestro dealt with an environment that it was launched into after the fact.
Maestro also came with a full complement of reports to give you information on pretty much anything you could think of. I especially liked their calendar report program which gave a graphic report of the schedules contained in the specified month. Spoolfile analysis is not included in Maestro, you would need to buy Spoolmate, or the bundle of the software that is called DCM/PAK.
Design3000 JMS/3000: This is another scheduler that has network support, and another one that I couldn't test in a network. In terms of approach I would have to say that JMS/3000 falls somewhere between Express and Maestro. There is only one program from which everything is controlled, which I found to be more convenient. Schedules weren't created as in the Maestro scenario, where you enter an editor and build your schedule; they were done more along the lines of the Express scheduler, where you build them interactively. It seemed as though you had more ability to make modifications in JMS, however.
At first glance I thought that there were some major capabilities missing, but a quick look at the manual and a call to tech support cleared up that misconception. This program has some major bells and whistles buried under its simple-seeming exterior. The first thing I noticed about the program was that when you started up the background job it automatically raised your MPE job limit by one to account for itself. I thought this was a nice considerate touch on their part.
I want to tell you about some of the bells and whistles that are in the software before I start getting too specific. They have a command called JOBSTEP that will let you look at an open spoolfile and see what command it is on. This is a really nice feature to have if you don't own NBSpool or are using the NM Spooler. They also support the ability to build and submit a job on the fly, which was a nice touch. The other thing I really liked was the ability to essentially build UDCs within the program: you could take any command and parameter list and assign your own command to it. This lets you configure the environment to your taste. The associated capability was the ability to dynamically load different function keys from flat files whenever you wanted. They even supply an 8-command redo history, which I don't believe anybody else had.
The JMS system opted to use many of the MPE commands that you are familiar with as commands within JMS to control JMS functions. They are also using queues referred to as BS, CS, DS and ES as job queues; these are also tuning queues in MPE to control the dispatcher. I have to say that in my opinion this wasn't the best design decision, as it can cause confusion when dealing with command syntax. It could even cause you to give erroneous information to an MPE command because you were used to the JMS equivalent.
Ok, to get things started I will quote from the demo guide. The first thing we are going to do is start a few ad hoc jobs with a simple dependency between two of them.
JMScmd? ADDBATCH XXX
Filename: GOODJOB
Filename: BADJOB
Filename:
JMScmd? ADDBATCH
Filename: ANYJOB
Filename:
The ADDBATCH command has various parameters that you can assign; it is essentially like a stream command that takes parameters and then prompts for file names. In the first example we said ADDBATCH XXX; the XXX is what they call a sequence code, basically any string of three characters that you feel like using. What it does is enforce single threading for that sequence code, so the jobs GOODJOB and BADJOB will run one at a time, but ANYJOB can run with either one since there is no sequence code. You could also specify priority, state, and time restrictions, to name a few.
It is important to note that JMS assigns its own internal job numbers to jobs, which have nothing to do with the job numbers that they will get when they are released to MPE. As far as I could tell JMS never makes use of the MPE job number. Any command that operates on a job can use either the job name or number as a parameter.
There is a command called DEFINE that lets you build all sorts of things like internal pseudo-UDCs, calendars and such. As a matter of fact JMS offers so much control and so many permutations of configuration commands that I found myself somewhat overwhelmed by it all. After looking through it, however, it appears that you would probably only use about 20% of it on a regular basis.
To build a schedule you would use the ADDSCHEDULE command. This will prompt you for a file name and the dates and days for the job stream to run. You can also specify that it is to retrieve a new copy of the source JCL before streaming it; JMS automatically loads all the JCL that it is handling into itself and streams from there, so if you will be making updates to the JCL you will want to specify that it is to be retrieved. Dependencies here would be specified with the same sequence code that we used for the ADDBATCH command. If you need to have multiple sequence codes dependent on one sequence code you will have to use the ALTER command to add a completion list to the schedule. You can also freeze part of a schedule by using the LOCKSEQCODE command on a sequence code; this locks the sequence code so that none of the jobs assigned to the code can execute.
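Just to visualize the dialogue, an ADDSCHEDULE session might run something like the sketch below. The prompts are patterned after the ADDBATCH dialogue shown earlier and are only my approximation of the wording, not the program's actual prompts.

JMScmd? ADDSCHEDULE GLX
Filename: GLDAILY.JCL.ACCTG
Days/dates to run: MON-FRI
Retrieve new copy of JCL? Y
JMScmd?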
JMS also has a facility similar to the Maestro resource: the RESTRICT verb. This lets you set up global restrictions by all sorts of different parameters. You could limit your job queue to 2 for jobs running a certain program, or to 1 for jobs accessing a particular file, or even restrict based on job logon. There are other parameters available to this command, but you get the idea.
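As a purely hypothetical sketch of the idea (the parameter names here are made up, not quoted from the JMS manual), a restriction limiting jobs that access the general ledger data base to one at a time might look something like:

JMScmd? RESTRICT FILE=GLDB.DATA.ACCTG;LIMIT=1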
Now how about parameter substitution. JMS has very powerful parameter substitution capability for both dates and regular text information. You can have the job prompt at submission time, issue a request to the console and get the information from the reply, or use an expression that is dereferenced at submission time, such as the current date. The only thing I didn't really like about the implementation is that it was a little cryptic; here are some examples.
!FILE PRINTER;DEV=LP,,?\N,1,Number of copies,1,VAR1\
Prompts once (?) for a single digit (N,1) with a default value of one (,1) and puts the value into the variable VAR1 as well as substituting it into the file equation.
!RUN PROG3 #\MDY,F4,Enter report dates(3)\
                  Prompts multiple times (#), formatting the date as Month xx, 19xx (F4)
.\$TODAY\         Today's date in YYMMDD format
.\TODAY-3D,1\     Today minus 3 days in MMDDYY format
.\$MTHEND+2M,3\   2 months from end of month in MTH xx, 19xx format
.\$MONDAY+1W,4\   2nd Monday in Month xx, 19xx format
As you can see there is a high degree of flexibility, but the cost is prefixes and formatting codes that you need to memorize. Overall the prompt types are fairly intuitive, but the date codes have no relationship to anything; they are just numerically assigned. You can also define and build fiscal calendars (which I use extensively) using the DEFINE command, as I mentioned earlier.
JMS has very nice trapping of user streamed jobs. Without any effort on your part you can have JMS trap your input spoolfiles for recovery and have it manage your user streamed jobs. You can also save off job streams that are still in a WAIT or SCHED state if you want. They also include an internal log with log file analysis, which I found to serve quite well when I wanted to look through the status of jobs. Along this line there are various reporting capabilities included to view different aspects of your environment.
NSD JobTime/3000: These are the folks that brought you JobRescue, one of the first and best spoolfile analysis packages. What is interesting is that NSD got into the scheduler game a little backwards from everyone else: they had spoolfile analysis and then wrote a scheduler, while everyone else had a scheduler and then maybe wrote spoolfile analysis. When I originally got my demo the scheduler was an optional function that you could buy with JobRescue; by the time you read this they will have broken it out into a separate product called JobTime. This means you won't get spoolfile analysis, however.
Since the copy I had for review was the version that ran from within JobRescue, I will make my review based on that. I will refer to the product as JobTime since that is its new official name. The prices I will be quoting will be for JobTime alone, without JobRescue.
JobTime expects you to maintain your schedule in a file called SCHEDULE.JOBRSQ.NSD. This is a regular flat file and you maintain it with your favorite editor; there wasn't a provision for loading up the editor for you from within the software. Since JobTime is actually a function within JobRescue in the version that I had, you needed to first run JobRescue and then type SCHEDULE to get into the scheduling program. This is your control program; this is where you would start and stop schedules as well as remove them from the current schedule. JobTime uses two different methods for job streaming. The first is the One Time Scheduler, which is for firing off non-repetitive, exception type jobs. The second type is the scheduled jobs.
Before I get too deep into some of the functions I want to show you a couple of simple examples of syntax from the SCHEDULE.JOBRSQ file so that some of the commands make more sense.
$DEFINE SCHEDULE WEEKLY-REPORTS
$RUNAT END-OF-WEEK
$STEP 1 STREAM DAILY.JCL.ACCTG
$STEP 2 STREAM WEEKLY.JCL.ACCTG
$DEFINE END

$DEFINE CALENDAR END-OF-WEEK
$DAY FRI, 21:00
$DEFINE END
Everything is set up in the $DEFINE..$DEFINE END blocks; you have SCHEDULE, CALENDAR, CONCURRENT, and CONFLICT. In our example we have defined a schedule called WEEKLY-REPORTS that will RUNAT the END-OF-WEEK event. Since END-OF-WEEK is defined to be Fridays at 9 pm, that means the WEEKLY-REPORTS schedule will run every Friday night at 9. $STEP refers to the order in which the schedule parameters are to be executed, and each line can also have dependencies on schedules, jobs, or dates applied. Here is a good example of the versatility of the scheduling syntax; the second example works just like the first one:
Example 1.

$DEFINE SCHEDULE FINANCE-REPORTS
$RUNAT EVERY-FRIDAY
$RUNAT EVERY-WEDNESDAY
$STEP 100 STREAM WEEKEND.JCL.ACCTG
$STEP 200 STREAM UPDATE.JCL.ACCTG
$DEFINE END

$DEFINE CALENDAR EVERY-FRIDAY
$DAY FRI, 09:00
$DEFINE END

$DEFINE CALENDAR EVERY-WEDNESDAY
$DAY WED, 09:00
$DEFINE END

Example 2.

$DEFINE SCHEDULE FINANCE-REPORTS
$RUNAT WEEKLY-REPORTING-CYCLE
$STEP 100 STREAM WEEKEND.JCL.ACCTG
$STEP 200 STREAM UPDATE.JCL.ACCTG
$DEFINE END

$DEFINE CALENDAR WEEKLY-REPORTING-CYCLE
$DAY WED, FRI, 09:00
$DEFINE END

You can get pretty fancy with the calendar specifications too, as seen in the next example:

$DEFINE CALENDAR SHAWNS-TEST-CALENDAR
$DATE 92JAN31, 92FEB29, 92MAR31, 22:00
$DAY MON, TUE(2), 06:00, 00:00, MAR, APR, MAY
$WORKDAY 10, JUN, JUL, AUG, SEP, OCT, 22:00
$DEFINE END
The $DATE line indicates that this calendar is valid at 10 pm on the last day of the first 3 months of the year. The $DAY line also applies the calendar to every Monday and the second Tuesday at 6 am and midnight during March, April and May. $WORKDAY refers to work days of the month rather than specific dates or names of days; Saturdays and Sundays are not included. So if the first of the month is a Tuesday, then the next Monday would be $WORKDAY 5.
You can specify dependency of schedules within the $DEFINE SCHEDULE blocks by using the $DEPENDSON keyword before you specify the action that is to be taken. Here is a small example of a typical dependency:
$DEFINE SCHEDULE END-OF-DAY
$STEP 100 STREAM CLEANUP.JCL.SYS
$STEP 200 STREAM BACKUP.JCL.SYS (DAILY-BACKUP)
$DEFINE END

$DEFINE SCHEDULE NIGHTLY-PRODUCTION
$DEPENDSON DAILY-BACKUP
$STEP 100 START MANUFACTURING-JOBS
$STEP 200 START FINANCE-JOBS
$DEFINE END
Some of the functions that I am reviewing here are only relevant in version 6.0 of the scheduler, which should be in general release by the time you read this. The newer version has added some much needed functionality to the scheduler, making it much more versatile and easy to use. They even added a screen interface to control the software; I didn't see this, however, so I can't comment on how good it is.
I was really torn on JobTime. I really liked the whole DEFINE block thing; I don't know why, but I felt like I had a lot of control over how things worked. The program makes almost no assumptions about anything and requires you to set up everything. Parameter substitution is implemented by defining the parameters and their values inside a PARM file and then referencing the PARM file in the $STEP verb. I found this approach to be rather cumbersome to use and poorly documented.
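Since the documentation is thin here, the following is only my sketch of the idea, with made-up file and parameter names: the parameter values live in a PARM file and the $STEP line points at that file.

A PARM file called MONTHEND.PARM.ACCTG might contain:

PERIOD=12
RUNDATE=921130

and a schedule step might reference it with something like:

$STEP 100 STREAM CLOSE.JCL.ACCTG PARM=MONTHEND.PARM.ACCTG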
If you have ever used Security/3000 from Vesoft and had to set up their SECURCON file you will probably be at home with JobTime; they seem to be similar in terms of approach. I really wish I could have seen their new screen interface, since the only other product that had one for the full system was Masterop. There is a lot of power and versatility in this program even though, of all five reviewed here, it is the newest one. In some respects this 'newness' is obvious, but it is a well thought out product.
Usability (also installation)
OCS: I have to say that OCS had one of the most impressive installation programs of anyone, but it took about an hour to install. Since my shop streams almost everything based on an internal fiscal calendar, I was very impressed with the way the installation figured out what my calendar was and built it for me. They distribute almost everything on the tape in a compressed format and must de-compress it during installation, which adds greatly to the install time; I would assume they are compressing the files because they can't fit the software onto one tape otherwise. I did notice that the installation took quite a bit of disk space compared to some of the other schedulers.
I found the OCS approach to building a schedule to be more difficult than was strictly necessary. There didn't seem to be a way to just build a schedule; you had to submit things and assign the dependencies and such at that time, as opposed to building a schedule in a file and then launching it. This approach might be more intuitive to other people than it was for me, however.
Unison: They took a slightly different approach: you restore a program file and then run it, and this will then create and launch the install jobs without requiring you to add passwords to a job stream and potentially leave a security risk lying around. They did not, however, clean up their install program once they were done, and it was left sitting in my SYS account. They also build the accounting structure for all their products, not just the ones you are installing, although they don't put files in them. The full install of Maestro actually took less than 17k sectors, which I found to be extremely reasonable.
Maestro is a very mature scheduler and has a good reputation in the industry, and I think it deserves that reputation. I am however reminded of a scene from Amadeus where the emperor says to Mozart "There are just too many notes"; Maestro suffers from just too many programs. I understand that integrating them is a hot topic for Unison, after getting the Unix version out. Hopefully now that they have such a robust scheduler they will have a little time to go back and integrate it a little more.
Design3000: They took the standard approach of restoring a file to the PUB.SYS account and group, followed by editing the file for passwords, streaming the job and putting the tape drive back on-line. They did a nice clean up job afterward and the installation was quick. The only thing I didn't care for was the fact that they automatically print out their README file, which in my case was about 1/2 inch thick. I understand that they want you to read this; I just didn't like it starting to print all by itself without my knowing it was going to happen. Their install took less than 16k sectors of disk space, which was about the same as Unison, so I was pleased with that.
JMS has been around for quite some time, but they don't make as much noise as the big boys, so it is possible you weren't aware that the program was out there. Design3000 is a small shop and it's nice to have that sort of personal feel that you get from a small place; I think they will probably be around for some time to come. Their scheduler has some nice features not found in any of the others and has one heck of a lot of power. The program is a little confusing and might get to be a little unwieldy if you are manipulating a really large schedule environment.
NSD: Once again we have the standard restore a job and stream it approach that is common with most software installations. Everything installed easily and quickly with a minimum of effort on my part. Their installation took by far the least amount of disk space, coming in at just 10k sectors.
The NSD approach to building a schedule seemed to be one of the most straightforward, but it left all the defining up to the user. This might be an annoyance if you don't want to have to do everything, or it could be a great asset, giving you all the control over how your environment is set up. I like being able to set everything up in a file and control it all in one place. I can see this approach becoming a little unwieldy if you are going to have a really large schedule; it could take some time to go through the file and find what you are looking for. They could almost benefit from enforcing some sort of structure on the schedule file, although due to its free form nature you could make up your own structure to follow.
Performance and Reliability
There is no sense in covering each of the products separately since they all come out about equal in these categories. Since all the schedulers have a background job that sits and waits for something to happen, they take a negligible amount of CPU according to PROBE.
There is one thing I would like to distinguish, however. I use MPEX all the time where I am at, and I am used to having a huge process tree of suspended programs; this reduces the load on my system when I run them and it speeds them up when they run. The only scheduler that I found that worked well in a process handling environment was Masterop. It would suspend itself, and talk to MPEX, NBSpool and SPOOK directly from the program. It also included a very sophisticated UDC handler: you could load in any UDC file that you wanted so you could execute commands from inside Masterop instead of exiting. You could also build UDCs that were valid Masterop commands that you could then use inside the program, thus making it possible to customize your environment to your taste. There is a similar UDC type capability in JMS that lets you DEFINE commands and their parameters.
Nobody crashed my system and everybody's software did what I expected it to and what its documentation said it could do. There really weren't any surprises, just some false starts on my part. All any batch scheduler really needs is the patience to learn and install it.
Supportability (including Doc)
OCS: My first encounter with OCS's tech support was not an encouraging one. I had a simple question that I already had the answer to, but I wanted to see what their response was. I asked if their software would interface with STREAMX by VESOFT. Now I don't currently own STREAMX, but I may want to get it, and it is important to me that a scheduler would work with it. STREAMX will interface with just about anything; the only thing you need to know from the scheduler vendor is what programs need to be patched. So that was my question, and the response I got was 'No, why would you want to do that anyway?'. Since I already knew that they would interface I was a bit put off by this third party rivalry attitude. After talking to my sales rep I found that this person was new to tech support, and I talked to a more senior person and got the answer I expected. I talked to their support one last time and found it to be very good. So except for the first encounter my interaction with them was positive.
The documentation included with Express is overall very complete, but in some areas it seems to make too many assumptions about what you should already know by the time you read it. It also doesn't flow all that well. I am not usually one to read a manual from cover to cover, so I found this a little irritating. The program's online help is good and allows you to get a summary list of commands. I liked the quick reference card that was included with the software; these are always a nice touch.
Unison: I like having the quick reference guide and the demo guide included with the software. These are two good tools for learning the basics of the software quickly and an easy way to look up commands. The reference manual is well laid out and has a lot of examples, which is always good. The first time I called their tech support I identified myself as a demo site and they informed me that everyone was busy and someone would have to call me back. It took about an hour before I heard back from them; once we did talk, however, they were very knowledgeable and friendly. In the four times that I called tech support I never got someone immediately. It might have been because it was their fiscal year end and they had a lot of demo sites out.
They also have an option they call AIO, which is the Application Interface Option. This is so Maestro will know about and be able to deal with jobs that are built inside a program, where there is no job stream file on the system. This greatly enhances your ability to recover in the case of a system failure, as well as adding substantially to the control you have over the batch environment.
Design: The system came with a demo guide and user manual; my complaint with both of them is that there are plenty of examples but not really enough explanation of the examples or real world scenarios. The online help is good but slightly inconsistent: sometimes when you asked for help on an invalid command you were told that it didn't exist, other times it just beeped (they are going to fix this). When you said HELP just to get a list of commands, there were some commands displayed that you couldn't get help on by typing HELP cmd. I found this to be just a bit odd.
Design also includes the ability to add calls to your programs to submit jobs directly to their scheduler automatically. I really like the fact that they can grab input spoolfiles from wherever they are created and keep a copy for recovery purposes; this was really slick. The phone support that I received was very good; I always reached a tech immediately and got the answer that I needed.
NSD: Their manual serves as a good reference with plenty of examples, but there is no demo guide included. All you get is about 20 pages at the beginning of the manual that walks you through some of the basics.
Summary
I don't think it would be entirely fair to tell you which one I recommend the most since they are all similar in functionality. I almost wish 2 or 3 of these companies would get together and integrate all the best parts of each of their systems into one giant beautiful program. The prices on all the products ranged from $4500 up to $8500 for my Series 70. Here is a checklist of things you should go through when you decide to look at and buy a batch scheduler.
1) Do you really need one? They are not trivial to install.
2) Price.
3) Upgrade fees – some of these guys will take you to the cleaners on upgrades, but if you are not going to be upgrading your CPU then it's not a problem.
4) Performance.
5) How good is their support?
6) How easy is the software to install and use? You should only need to call support if you find a bug in the software, after you get past the learning curve.
7) Does it solve the problems for your environment, i.e., networks, user streamed jobs, job queueing, STDLIST analysis, etc.?
8) Keep in mind that biggest/most expensive isn't necessarily best for you and your shop. Don't get snowballed by sales pitches.
9) After you leave your job, is your replacement going to have to keep calling you to find out what you did?
I can't say how often I have seen a shop end up with a bad solution because of a slick salesman, so just be careful. I wish you luck when you start to look at schedulers; I know I didn't have an easy time deciding which one I would get for my shop. Make sure you take your time and test each part that is important to you. Don't let a vendor talk you into giving them a copy of another vendor's documentation. This used to go on a lot, but I think it has stopped overall today.
Well, I could prattle on for a while about what you should do, but ultimately it will be your decision. As you can see from the review, even the relatively unknown schedulers had a lot to offer, so don't be afraid to look at them too. I want to add that the only schedulers that didn't use an Image data base to store information were Masterop from Kemp software and Jobtime from NSD. This can be a concern if you opted to delete Image from your Spectrum system and are using Allbase or some other data base system that isn't Image.
At-a-Glance box

+------------------------------------------------------------------+
| Express, version 6.03.05                                          |
| OCS                                                               |
| 560 San Antonio Rd                                                |
| Palo Alto, CA 94306                                               |
| Phone: (415) 493-4122   FAX: (415) 493-3393                       |
| Call, write or fax for more information or a demo                 |
| Includes a one inch thick manual, an evaluation guide and a quick |
| reference card.                                                   |
| Price ranges from $5500 to $32,500 and support is $1170-$7475     |
|                                                                   |
| Good Points                                                       |
| Great installation program                                        |
| Terrific date handling and calendar creation                      |
| Ability to create logic blocks that can be applied to processes   |
|                                                                   |
| Bad Points                                                        |
| The manual doesn't explain some functions very well               |
| A rather obscure command syntax                                   |
| Strange mix of command driven and menu driven interface           |
+------------------------------------------------------------------+

+------------------------------------------------------------------+
| Maestro, version C.02.04                                          |
| Unison Software                                                   |
| 675 Almanor Ave.                                                  |
| Sunnyvale, CA 94086                                               |
| Phone: (408) 245-3000   FAX: (408) 245-1412                       |
| Call, write or fax for more information or a demo                 |
| Includes a one inch thick manual, an evaluation guide, an intro   |
| guide and a quick reference card.                                 |
| Price ranges $6200-$26,140 with support being $1200-$5100;        |
| volume purchases and public institutions are discounted           |
|                                                                   |
| Good Points                                                       |
|                                                                   |
| Versatile scheduling language                                     |
| Good parameter substitution and date handling                     |
| Versatile dependency allocation                                   |
|                                                                   |
| Bad Points                                                        |
|                                                                   |
| A non-integrated environment                                      |
| Cryptic date handling syntax                                      |
| Inconsistent command abbreviations                                |
+------------------------------------------------------------------+

+------------------------------------------------------------------+
| JMS3000, version 8.1.4 <0603>                                     |
| Design 3000 Inc.                                                  |
| 1214 Hawthorne Ave. N.E.                                          |
| P.O. Box 13086                                                    |
| Salem, OR 97309-1086                                              |
| Phone: (503) 585-0512   FAX: (503) 585-1706                       |
| Call, write or fax for more information or a demo                 |
| Includes a one inch thick manual and an evaluation guide          |
| Price ranges $4800-$26,900 with support being $815-$4550;         |
| volume purchases are discounted                                   |
|                                                                   |
| Good Points                                                       |
|                                                                   |
| Real nice recovery ability                                        |
| Ability to look at open spoolfiles                                |
| Versatile parameter and date manipulation                         |
|                                                                   |
| Bad Points                                                        |
|                                                                   |
| Use of MPE commands for different purposes under JMS              |
| Inconsistent command abbreviations                                |
| Codes used to describe date formats                               |
+------------------------------------------------------------------+

+------------------------------------------------------------------+
| JobTime, version 5.1                                              |
| NSD Inc.                                                          |
| 1400 Fashion Island, 4th Floor                                    |
| San Mateo, CA 94404                                               |
| Phone: (415) 573-5923   FAX: (415) 573-6691                       |
| Call, write or fax for more information or a demo                 |
| Includes a half inch thick manual                                 |
| Price ranges $1200-$6000, support $240-$1200 for JobRescue;       |
| scheduling option is $500-$2500, support $100-$500, with a price  |
| increase coming by the time you read this                         |
|                                                                   |
| Good Points                                                       |
|                                                                   |
| Versatile and highly configurable scheduling                      |
|                                                                   |
| Bad Points                                                        |
|                                                                   |
| No demo guide                                                     |
| A lot of syntax to learn                                          |
+------------------------------------------------------------------+