Planning for DBS
This section includes topics for consideration when planning for DBS.
DBS is designed to manage your tape drives with a minimum of intervention. Once the initial configuration definition is completed, DBS is normally transparent to both operators and users.
Before we can discuss the Configuration process we must introduce the Automation File.
The Automation File
DBS stores the Configuration and Policies in the Automation File. This file is like a Coupled Dataset for Automation Services, and is also used by SLM. It is shared among all systems in a JESplex. Installations with more than one JESplex will need one Automation File for each JESplex.
The Automation File is initialized and maintained using ISPF dialogs. When you define an Automation File for the first time, information describing the JESplex is collected. Since this description is used by other automation facilities, the Automation File for a particular JESplex needs to be created and initialized only once.
There are no performance considerations. You need only specify a volume that is shared across the JESplex. The ISPF dialog performs the actual allocation and initialization.
To activate a DBS Configuration from the Automation File:
- The Control File must be converted to Version 6.
- The DBS option must be enabled on the JES2 initialization statement TMPARM.
- Use the DBS configuration dialog in ISPF to define your Configuration and any Policies you want to implement.
Configuration Definition
When referring to the creation of the initial or NEW configuration, we use the word definition as opposed to planning. The exercise is really a “fact-finding” process. The configuration in your installation is what it is, so you must collect the following information from your tape subsystem support group:
- The vendors that are present in your installation. DBS supports two hardware vendors and two software vendors:
- IBM
- StorageTek
- CA-Vtape
- EMC CopyCross
- The operational modes for each vendor:
- Manual
- Robotics
- Virtual
- The names of all the libraries managing the devices. (The “names” for StorageTek robotics are predefined as ACS00 to ACSnn.)
- The way the actual transports have been SYSGENed. This is IBM’s generic device type, for example 3490E.
- The actual device type and its description depending on its generic device type. For more information, see Supported Devices.
- The actual device numbers.
- Any asymmetries in the I/O configuration. For example, some transports might not be accessible to all the participating systems in the JESplex.
Without this information, there is no point in attempting to create the initial DBS configuration. The needed details should be readily available from the technical group that supports the z/OS I/O configuration.
The process of creating a Configuration is highly structured. Here we will discuss only the creation of the initial Configuration, which will be given the status of NEW. It maintains NEW status until it is activated. Once a NEW Configuration is activated, the status NEW is no longer used. When creating a future Configuration its status will be NEXT. The actual Configuration management cycle is described in Administering DBS.
It is important to review carefully the information gathered from the z/OS hardware support group. You should include only the transports that DBS is to manage. If you have transports that are not associated with traditional z/OS systems and as such are not available to allocation, they must be omitted from the definition.
The creation of a Configuration is the first step in the process. You can do this without having yet decided the Policies to be created. When the NEW Configuration is saved, a default Policy is automatically created under the name **BASE**. This Policy simply reflects the Configuration.
In your Configuration you can include “future” devices, so you can be prepared for expansion (provided the planned expansion is known).
The Configuration is the framework that tells DBS what devices are to be managed.
Policy Planning
This is a planning process.
As already indicated, an initial default Policy named **BASE** is created automatically. This Policy is a one-to-one reflection of the Configuration. If your actual configuration is totally symmetric, then so is the default Policy. If the Configuration reflects hardware asymmetries, then so will the default Policy.
The **BASE** Policy can be edited to reflect whatever changes you might think are appropriate; however, since this Policy is your fall-back in case of problems, you should keep it as simple as possible.
The most obvious type of change you might want to make to the **BASE** Policy is the elimination of any transports that have been included to reflect “futures.” Since they are not present, the actual counts reflecting availability should not include these future drives.
The construction of Policies requires a review of a number of considerations. The Configuration creation is a factual process. Policies, on the other hand, reflect deployment decisions. As such, a number of value judgments go into Policy creation.
We defer the discussion of Work Group management for now. This aspect of DBS requires management decisions about resource allocation (who gets what); it is less a technical issue than a business one. A separate section below addresses the technical aspects of Work Group resource allocation.
In installations with a single JESplex and only one LPAR, the permutations and combinations of how to deploy tape transports are limited, so creating Policies should be simple. Separate Policies might reflect situations where the installation wants to run restricted tape workloads during certain times of the day.
For example, there might be periods during the evening where the backup process for your distributed servers (assuming you back them up centrally) requires as many tape transports as possible. You might still want to run some critical batch work that requires tape. A Policy can address that need by simply restricting the number of drives available to batch during that period.
JESplexes with multiple LPARs present a number of alternatives to transport deployment under DBS management. A number of questions should therefore be considered, since it is possible to alter load distribution with simple Policy changes:
- If the hardware is symmetric:
- Do you want to run an “equal opportunity” JESplex?
- Do you want to bias the tape workload towards a particular LPAR?
- Do you want to direct certain types of work (for example, manual library requests) to only one system?
- Do you want to stop or minimize tape processing during a particular period of the day:
- For the whole JESplex?
- For a single LPAR?
- With Policies, you can make your hardware appear to be as asymmetric as you want.
For example, you might have two LPARS and two Virtual Tape subsystems. During normal operations LPAR A has access to VT1 and LPAR B has access to VT2. You can create this asymmetry with a Policy (even though the hardware is accessible to both LPARs). DBS will select jobs only in the correct system. If one of the LPARs fails and the workload has to be accommodated in the other LPAR, a simple Policy change will do that.
- You might want to consider Policies for hardware failures, or for hardware service.
For example, if you have multiple ACSs you can create appropriate Policies to pre-plan how to handle a particular ACS being out of service.
All Policies should have a name and description that reflect their purpose.
Work Group Planning
Because this area represents an allocation of resources to different workloads, it requires careful consideration. The mechanism provides significant flexibility to adjust resource allocation quickly. Activating a different Policy can drastically alter the tape resources available to a group.
Note that DBS Work Groups are optional.
What is a Work Group?
A Work Group is a collection of batch jobs that your installation considers to be related. The relationship is whatever you want it to be. It could be a similar level of importance even though the jobs come from different areas. It could be a type of work, such as production versus on demand. It could represent different divisions of the company.
The only requirement, from a DBS point of view, is that the jobs can be identified in JAL. The full power of JAL and DAL is available to identify the desired grouping. Once this is done, the job is assigned to the chosen Work Group.
Work Groups Versus Drive Pools
The Work Group concept is easy to understand, but if some simple rules are not followed it can be confusing to implement. Without it, DBS manages tape drives as z/OS does: there is no particular relationship between jobs and the available tape resources, and all jobs are treated alike. Either there are drives (of a particular type) for everyone or for no one. There is no “best fit” algorithm or preference for jobs requiring fewer tape transports. The actual choice of which job is to be selected next is left either to traditional JES2 class selection or to WLM Service Classes.
The role of DBS is to decide, once the job selection algorithm has made its choice, whether or not the job can be allowed to continue to initiation. At job selection time the decision is rather simple: either the pool(s) of drives needed by the job is opened or it is closed. If opened, the job is allowed to proceed; if closed, the job is bypassed.
The algorithm to determine when drive pools are opened or closed is complex and takes into consideration several factors. The approach is similar for Drive Pools and Work Group Pools. There is, however, an essential difference that is important to understand before values for Work Groups are assigned.
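The gating decision described above can be sketched as follows. This is an illustrative model only, not the DBS implementation; the pool names and the open/closed representation are assumptions for the example.

```python
def job_can_initiate(required_pools, pool_state):
    """A job proceeds to initiation only if every drive pool it
    needs is currently open; otherwise it is bypassed."""
    return all(pool_state.get(pool, False) for pool in required_pools)

# Hypothetical state: the TAPELIB1 3590 pool is open,
# the VTS01 virtual 3490E pool is closed.
pool_state = {"TAPELIB1/3590": True, "VTS01/3490E": False}

print(job_can_initiate(["TAPELIB1/3590"], pool_state))                 # True
print(job_can_initiate(["TAPELIB1/3590", "VTS01/3490E"], pool_state))  # False
```

The point of the sketch is the shape of the decision: a simple all-pools-open test applied after the job selection algorithm has made its choice.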
Drive Pools are defined at the lowest possible level of device groupings. For example, if you have an IBM automated library, let’s say TAPELIB1, with 8 3490s and 16 3590s, then DBS constructs two Drive Pools. One pool represents the 8 3490s, the other pool the 16 3590s.
If your installation were to add another similar library, TAPELIB2, then DBS constructs 4 Drive Pools. For Drive Pools, drives that are part of TAPELIB1 are not considered to be part of the same pool as drives from TAPELIB2. They are not interchangeable, even though they are the same type of drives. This has to do with volume accessibility.
For the purpose of defining values for Work Groups, the aggregation is different. When designing DBS it was decided that managing Work Groups at the lowest level was impractical: too much detail and too much segmentation would have made the task unnecessarily complicated. Instead, Work Groups are defined at the Vendor/Mode level.
In the example described above, the definitions for Work Groups are at the IBM/AUTOMATED level. Only two Work Group pools are created regardless of the number of similar (automated) libraries that are defined. So, from a Work Group definition point of view, you are given the opportunity to apportion drives from two Work Group Pools:
IBM/AUTOMATED/3490/3490E 16 Drives
IBM/AUTOMATED/3590/3590E 32 Drives

To repeat:
- Drive Pools are defined at the lowest possible level.
- Work Group Pools are defined at a higher level: VENDOR/MODE level.
The ISPF dialog automatically aggregates the counts, so you know what the total numbers are at any level.
The mechanism allows you to define a value that represents the maximum number of drives (of a given vendor/mode/type) that a particular Work Group can have allocated at any given time.
Apportioning Drives Using Work Groups
The second and sometimes confusing consideration is the process of apportioning the drives. This must be done as a separate process and in agreement with whatever group or groups need to be consulted in your installation. This differs from place to place because of organizational arrangements. You can apportion drives in different manners. What follows is a discussion of the facilities provided by the Work Group option.
You can dedicate drives to a particular Work Group. As the name indicates the drives (a count, not specific device numbers) are for the exclusive use of that Work Group. The dedicated number of drives—let’s say 5—are not available to any other Work Group even when they are idling. Again, what is dedicated is a number of transports, not devices such as 0F01, 0F02, etc. If we use an airline booking analogy, DBS reserves the appropriate number of “tickets” with no specific seating arrangement. Standard allocation when a “ticket” holder arrives takes care of the actual “seating arrangement.”
Of course, the total number of dedicated drives cannot exceed the number of available drives. Normally, the number of dedicated drives should be smaller than the number of available drives, so the difference between available drives minus dedicated drives represents the number of non-dedicated drives.
By default, all Work Groups contend for access to the non-dedicated drives: first come, first served. At any given time there might not be any available; at another time, a particular Work Group could be hoarding most if not all of the non-dedicated drives. You can handle the problem described above by controlling the maximum number of drives available to a particular Work Group. This is known in DBS as CAPPING a Work Group.
When a Work Group has been CAPPED, it does not matter whether or not additional drives are available and are idle. The CAP is a hard restriction.
The value associated with a CAP represents the sum total of the dedicated drives plus the “get-them-if-you-can” non-dedicated drives. So, assume that the values assigned to a Work Group are as follows:
Dedicated 6
CAP 12

This means:
- The Work Group will always have at its disposal 6 drives.
- In addition, it will be able to contend for another additional 6 drives.
The Work Group dialog performs all the necessary calculations. When 6 drives are dedicated, the number of non-dedicated drives available is reduced by 6; the dialog also verifies that at least 6 drives are actually available to be dedicated.
The dialog is very fluid: as soon as you dedicate drives, the number of non-dedicated drives changes.
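The arithmetic behind the dialog can be sketched as below. This is an assumed illustration of the bookkeeping, not DBS code; the Work Group name is hypothetical.

```python
def apportion(total_drives, dedications):
    """dedications maps Work Group name -> dedicated count.
    Returns the number of non-dedicated (shared) drives left
    for all Work Groups to contend for."""
    dedicated = sum(dedications.values())
    if dedicated > total_drives:
        # The dialog would reject this: you cannot dedicate
        # more drives than exist in the pool.
        raise ValueError("cannot dedicate more drives than exist")
    return total_drives - dedicated

# Dedicating 6 of 12 drives leaves 6 non-dedicated drives.
print(apportion(12, {"EXAMPLE": 6}))  # 6
```

A CAP of 12 for this Work Group would then mean its 6 dedicated drives plus up to 6 of the contended drives.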
Implementing Work Groups
What the dialog cannot do for your installation is decide how to create effective Work Groups and how to apportion drives across them. To implement Work Groups, you have to do the following:
- Determine whether you want to implement Work Groups at the JESplex level or at the individual JESplex member level.
- Decide how jobs are to be grouped. It could be as simple as production versus non-production.
- Based on the requirements resulting from the groupings, apportion drives to balance the different requirements.
- Consider constructing more than one Policy if needs change during the day. Your production workload might need most of the drives during some critical periods. This is easily done with different Policies.
Before considering the segmentation of your work load into different Work Groups, you should clearly understand that Work Groups are defined at two interdependent levels:
- The higher level allows for three partitions.
- Each high level partition can be further subdivided into two.
- The total number of Work Groups is therefore six (3 x 2).
The DBS Work Group structure is represented in the diagram below.
The actual apportioning of the available tape transports is done at the highest level; that is, they can be partitioned into three mutually exclusive segments. Each first-level partition can then apportion the drives it has received from the general pool to its two second-level Work Groups.
The first level partition represents a global decision, where all tape users are affected.
The second level partition is a local decision, because no additional drives can be apportioned that have not already been given to the first level. If no dedicated drives were given to the first level, no dedicated drives can be given to the second level. If the first level was CAPPED, the value of the CAP is automatically reflected in the second level.
In summary:
- Dedicating drives eliminates the possibility of a Work Group ending up with no drives.
- CAPPING eliminates the possibility of a Work Group hoarding all the non-dedicated drives.
- The overall allocation of drives is done at the first level of Work Groups.
- The second level of Work Groups is restricted by the values assigned to its corresponding first level.
- Drives that are dedicated to the first level do not necessarily have to be dedicated to the corresponding second level.
- All Work Groups are given unique names (installation chosen).
- When assigning a Work Group to jobs in JAL, the name of the second level is used.
- DBS displays show the name of the first and second level.
- Work Groups are associated with a Policy, not a Configuration.
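The structural rules summarized above can be expressed as a small validation sketch. The data layout is an assumption made for illustration; the real definitions are entered through the ISPF dialog, which performs these checks itself.

```python
def validate_policy(groups):
    """groups maps first-level name -> (dedicated, cap, subgroups),
    where subgroups maps second-level name -> (dedicated, cap)."""
    assert len(groups) <= 3, "at most three first-level Work Groups"
    for name, (dedicated, cap, subgroups) in groups.items():
        assert len(subgroups) <= 2, "each group splits into at most two"
        # Subgroups cannot dedicate drives not dedicated to their group.
        sub_dedicated = sum(d for d, _ in subgroups.values())
        assert sub_dedicated <= dedicated, f"{name}: too many dedicated"
        # A subgroup CAP cannot exceed its group's CAP.
        for sub, (_, sub_cap) in subgroups.items():
            assert sub_cap <= cap, f"{name}/{sub}: CAP exceeds group CAP"
    return True

# One first-level group of 6 dedicated / CAP 12, split into two.
print(validate_policy(
    {"PROD": (6, 12, {"SPECIAL": (2, 4), "NORMAL": (4, 12)})}))  # True
```

The checks mirror the summary: second-level values are always bounded by the corresponding first-level values.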
The Definition Process: What Should I Have?
Before starting the DBS Configuration and Policy definition process you should have information that is similar to the following example:
- The configuration has to be named. Let’s call it CONFIG01.
- The JES2 node name is needed: CAMPUS1.
- For verification purposes, the SPOOL data set name is requested. Let’s say SYS1.HASPACE.
- The SPOOL Volume prefix: CAMP1 in this case.
- The Participant JES2 member names. In this installation there are two LPARS: SYS1 and SYS2.
- The vendor or vendors, in this case IBM.
- The Modes, in this case Manual and Virtual.
- The high level devices: for Manual, 3590-1; for Virtual, 3490V.
- The actual devices under the high level: the 3590-1 are 3590E, the 3490V are 3490E.
- The VTS Library name, VTS01 for this example.
- Whether you plan to use Work Groups: NO initially.
Now the device counts and device numbers are needed:
- 6 3590E devices:
- Devices numbers 0360-0365.
- 64 Virtual 3490E devices:
- Device numbers 0380-03BF.
The above information is all that is needed for DBS to do its basic job. We will start with the **BASE** Policy.
To sum up:
CONFIG NAME CONFIG01
JESPLEX NAME: CAMPUS1
SPOOL DSNAME: SYS1.HASPACE
PREFIX: CAMP1
JES2 MEMBER NAMES: SYS1, SYS2
VENDOR: IBM
MODE: MANUAL
DEVICE TYPE: 3590-1
SUBTYPE: 3590E
DEVICE COUNT: 6
DEVICE NUMBERS: 0360-0365
MODE: VIRTUAL
DEVICE TYPE: 3490V
SUBTYPE: 3490E
DEVICE COUNT: 64
DEVICE NUMBERS: 0380-03BF

With the above information you can create the NEW Configuration. The dialog will automatically create the **BASE** Policy.
Let’s introduce Work Groups. You could do that by editing the **BASE** Policy, but we do not recommend that approach. The **BASE** Policy should be kept as simple as possible as a fall-back in case of problems. A new Policy should be created.
For the purpose of this example, let’s assume that the Work Group definitions are symmetric. (It is possible to have different values for each system.) Further, let’s assume that Work Groups are not needed to manage the Manual devices.
So, for the virtual devices we want to create Work Groups. In order to do so we have to do some fiction writing.
- This installation wants to divide its batch workload into three FIRST LEVEL Work Groups:
- PROD for production jobs
- NON_PROD for non-production jobs
- DEV for the development group
- The PROD Work Group is to have two sub-groups:
- SPECIAL
- NORMAL
- The NON_PROD Work Group is to have two sub-groups:
- TYPE1
- TYPE2
- Again, the DEV group has two sub-groups:
- DEV1
- DEV2
After the usual endless meetings, the drives (64 of them) are allocated as follows:
- PROD has access to as many as 48 drives. Of the 48 drives, 16 are dedicated. The remaining 16 drives in the pool are never available to PROD.
- NON_PROD gets to access up to 32 drives. They are given 8 dedicated drives.
- DEV has access up to 24 drives. They are given 6 dedicated drives.
Now we can examine how to express these values to DBS. For LIBRARY 1, named VTS01, and for 3490E devices, the following Work Groups are defined:
WORK GROUP 1
NAME: PROD
DEDICATED: 16
CAPPED: 48
WORK GROUP 2
NAME: NON_PROD
DEDICATED: 8
CAPPED: 32
WORK GROUP 3
NAME: DEV
DEDICATED: 6
CAPPED: 24

You can now divide each Work Group into two subgroups. Here the decisions are no longer “global.” The Groups are apportioning what is already theirs. As a result, the meetings should be smaller:
- Let’s assume that the PROD group wants to dedicate 4 drives for some special jobs. The rest is all contention among “normal” jobs.
- In the case of NON_PROD they want to dedicate 6 drives to one subgroup. All the other drives are available to both sub-groups.
- Finally, the DEV group wants to do a fair split of drives for the two subgroups but ensure that they cannot be monopolized by either group.
What follows is the implementation of the above rules. They probably represent several days of discussion, but with the magic of DBS they can be implemented in minutes.
The complete Work Group Definitions, if they were to be entered in a table as opposed to a dialog, would look like the following:
WORK GROUP 1
NAME: PROD DEDICATED: 16 CAPPED: 48
SUBGROUP 1
NAME: SPECIAL DEDICATED: 4 CAPPED: 4
SUBGROUP 2
NAME: NORMAL DEDICATED: 12 CAPPED: 44
WORK GROUP 2
NAME: NON_PROD DEDICATED: 8 CAPPED: 32
SUBGROUP 1
NAME: TYPE1 DEDICATED: 6 CAPPED: 32
SUBGROUP 2
NAME: TYPE2 DEDICATED: 0 CAPPED: 26
WORK GROUP 3
NAME: DEV DEDICATED: 6 CAPPED: 24
SUBGROUP 1
NAME: DEV1 DEDICATED: 3 CAPPED: 21
SUBGROUP 2
NAME: DEV2 DEDICATED: 3 CAPPED: 21

Is it obvious what the numbers mean? Well, they may require a bit of an explanation, since each Work Group affects the others:
- The maximum (CAPPED) that Work Group PROD can ever allocate is 48 drives.
- If that were to occur then there are only 16 drives left for NON_PROD and DEV.
- Of the 16 drives, 14 are already preallocated because NON_PROD has a DEDICATED count of 8 and DEV has a DEDICATED count of 6.
- So, in this situation there are only 2 drives for NON_PROD and DEV to contend for.
- The DEDICATED count indicates that regardless of the availability of work for that Work Group, the drives are reserved and not accessible to other Work Groups.
For example, even if no jobs for Work Groups PROD and NON_PROD are in the JESplex, the maximum number of drives available to DEV would be 40 (64 total minus the 16 drives dedicated to PROD and the 8 dedicated to NON_PROD), even if no restrictions were placed on DEV. As defined, the Work Group is restricted to 24.
- The CAPPED number of drives includes the “dedicated drives” (DEDICATED), which could be zero, and the shared drives which could be zero also.
- The sum of the DEDICATED values for SUBGROUPS cannot exceed the DEDICATED value of the Work Group to which they belong: you cannot dedicate drives to a SUBGROUP that have not been dedicated to its Work Group. The reverse is not required. You do not have to dedicate drives at the second level just because the first level has dedicated drives; you can instead let the two second-level Work Groups contend equally for the dedicated drives. The Work Group PROD/NORMAL could be defined as DEDICATED 16, CAPPED 48 or as DEDICATED 0, CAPPED 48 with equal results; this is a consequence of the way PROD/SPECIAL is defined.
Note that the calculations are done automatically by the Work Groups definition dialog. For example, if there are 64 drives in the Drive Pool and you assign a DEDICATED count of 24 to a Work Group, the number of drives available for apportioning to other Work Groups is reduced to 40 automatically.
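The contention arithmetic in the bullets above can be worked through explicitly. The values come from the sample Policy; the calculation itself is plain bookkeeping, not DBS code.

```python
total = 64              # virtual 3490E drives in the pool
prod_cap = 48           # PROD's CAP
nonprod_dedicated = 8   # NON_PROD's DEDICATED count
dev_dedicated = 6       # DEV's DEDICATED count

# Drives left for everyone else when PROD is at its CAP.
remaining = total - prod_cap

# Of those, the dedicated counts are already reserved.
preallocated = nonprod_dedicated + dev_dedicated

# What NON_PROD and DEV can actually contend for.
contended = remaining - preallocated

print(remaining, preallocated, contended)  # 16 14 2
```

This is the "only 2 drives to contend for" situation the bullets describe.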
Now we can show why you can define multiple Policies to handle particular situations. Let’s say that the situation described above represents daytime drive allocations for Work Groups. This installation re-apportions the tape drives differently after midnight to reflect the needs of the production cycle. At that time they apportion most of the drives to the production group. The development group does not have any drives during that period. This Policy might be called NIGHT:
POLICY NAME: NIGHT
WORK GROUP 1
NAME: PROD DEDICATED: 40 CAPPED: 56
SUBGROUP 1
NAME: SPECIAL DEDICATED: 4 CAPPED: 4
SUBGROUP 2
NAME: NORMAL DEDICATED: 36 CAPPED: 52
WORK GROUP 2
NAME: NON_PROD DEDICATED: 0 CAPPED: 18
SUBGROUP 1
NAME: TYPE1 DEDICATED: 0 CAPPED: 18
SUBGROUP 2
NAME: TYPE2 DEDICATED: 0 CAPPED: 18
WORK GROUP 3
NAME: DEV DEDICATED: 0 CAPPED: 0
SUBGROUP 1
NAME: DEV1 DEDICATED: 0 CAPPED: 0
SUBGROUP 2
NAME: DEV2 DEDICATED: 0 CAPPED: 0

By simply activating the “night” Policy, the Production group has control of most of the drives.
Another Policy can be created in case the Production Group runs into serious difficulties. The EMERGNCY Policy might look like the following:
POLICY NAME: EMERGNCY
WORK GROUP 1
NAME: PROD DEDICATED: 4 CAPPED: 64
SUBGROUP 1
NAME: SPECIAL DEDICATED: 4 CAPPED: 4
SUBGROUP 2
NAME: NORMAL DEDICATED: 0 CAPPED: 60
WORK GROUP 2
NAME: NON_PROD DEDICATED: 0 CAPPED: 0
SUBGROUP 1
NAME: TYPE1 DEDICATED: 0 CAPPED: 0
SUBGROUP 2
NAME: TYPE2 DEDICATED: 0 CAPPED: 0
WORK GROUP 3
NAME: DEV DEDICATED: 0 CAPPED: 0
SUBGROUP 1
NAME: DEV1 DEDICATED: 0 CAPPED: 0
SUBGROUP 2
NAME: DEV2 DEDICATED: 0 CAPPED: 0

If this Policy is activated, the NON_PROD and DEV groups should go fishing.
DBS and Job Limiting (JLS)
Prior to the availability of DBS, the Job Limiting Services (JLS) facility of ThruPut Manager was the mechanism for controlling tape usage. DBS has largely taken over this function, but for some installations JLS still has a role to play. This section explains how JLS can be used to extend control over tape device apportioning beyond the level provided by DBS Work Groups.
The Old Role of JLS
JLS is a general purpose facility. As such, it is a more abstract mechanism than DBS. DBS “knows” that its mission is to help with the process of tape drive management; JLS does not. DBS deals with actual device counts, while JLS deals with abstract counts. As a result, the installation has to make the connection between the abstract counts and the actual number of tape drives available. ThruPut Manager assumed the responsibility of establishing high watermarks for the job and, if requested, reducing them at step termination if the step in question was the one reflecting the current high watermark.
The job selection mechanism of ThruPut Manager determined, among other things, if there were enough units (in the abstract count) to accommodate the actual count associated with the job. If not, the job was not selected.
Since the number associated with the job represented the high watermark, underutilization of drives occurred unless the installation overbooked the tape drives by defining an abstract number higher than the actual number.
The New Role of JLS
None of these considerations are pertinent when DBS assumes the tape drive management. DBS has overbooking algorithms built in. They apply to general Drive Pools as well as values associated with Work Group Pools. You might, however, have situations where you want to restrict access to drives beyond the segmentation provided by DBS Work Groups. In essence, you want a finer degree of granularity. This has to be done with JLS.
To facilitate this process, DBS introduces a set of JAL Descriptors that reflect the values used for the management of DBS Work Groups. These new Descriptors represent the high watermark of a particular DBS Pool. Following normal JLS rules, the Descriptor can be associated as “weight” for a given Limiting Agent, which in turn is associated with the job.
The high watermark value for these Descriptors is adjusted (downwards) as required. This occurs automatically for any Limiting Agent that uses the new DBS Descriptors.
An Example of Using JLS to Extend DBS
Let’s discuss an example where Work Groups are extended by the use of JLS.
One of the Work Groups, which we will call DEV2, represents an IS development group. The individual developers do not have jobs that require more than 2 tape drives. In most cases, one drive is all that they require for an individual job; however, it is possible for a developer to submit multiple jobs at once. This prevents other members of the group from having access to drives until the jobs from that particular developer terminate.
When the general apportioning of drives to Work Groups took place, DEV2 was given a maximum of six drives. The facilities provided by DBS offer no way to create further restrictions controlling individuals, even though the members of the group are prepared to accept a restriction of two concurrent drives per individual. Without a mechanism that automatically enforces this agreement, the only alternative is to manage the jobs manually by submitting them in HOLD.
Here is where JLS can play a role by complementing the Work Group segmentation of DBS. Let’s say you can identify individual developers by User ID. A Limiting Agent for each developer can be created dynamically in JAL with a maximum value of 2, and a “weight” represented by the appropriate DBS Descriptor. That is all that is needed to have a totally automated solution:
- DBS manages the Global Pool.
- DBS enforces the Work Group values (6 in this case).
- JLS enforces the individual maximum (2 in this case).
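The layered check that results can be modeled as follows. JAL and JLS are not Python, so this is only an assumed illustration of the logic: both the Work Group value and the per-developer Limiting Agent value must be satisfied before a job is released.

```python
def can_release(job_drives, user, in_use_by_user, in_use_by_group,
                user_limit=2, group_limit=6):
    """Release a job only if it fits under both the DBS Work Group
    value (group_limit) and the JLS per-user limit (user_limit)."""
    if in_use_by_group + job_drives > group_limit:        # DBS check
        return False
    if in_use_by_user.get(user, 0) + job_drives > user_limit:  # JLS check
        return False
    return True

# Hypothetical state: DEV2 has 5 drives busy; user DEVA holds 2 of them.
print(can_release(1, "DEVA", {"DEVA": 2}, 5))  # False: DEVA is at 2
print(can_release(1, "DEVB", {"DEVA": 2}, 5))  # True: group would reach 6
```

Note the subordination: the group-level (DBS) test comes first, and the user-level (JLS) test only refines it, which matches the rule of always subordinating JLS to DBS.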
The structure for using JLS to extend DBS Work Groups will look like this:
When combining DBS and JLS to control tape drive usage remember the following rules:
- Do the overall tape usage control with DBS.
- Do the next level of segmentation with DBS Work Groups.
- If you need finer granularity use JLS with the new DBS Descriptors.
Always subordinate JLS to DBS to get the best results.
Relating DBS Descriptors to $UNIT Descriptors
The new DBS Descriptors and their relationship to the $UNIT Descriptors are shown in the Table of Supported Drive Pools accompanying the description of the $DBS_drivepool Descriptors in the JAL Reference Guide. In some cases, there is not a one-to-one relationship; if there were, there would be no need for new Descriptors. The most obvious difference is the DBS separation of Descriptors by vendor. Installations with only one vendor will see fewer differences than installations with multiple vendors.