Quick Reference for Admins (Metadata Analyzer)


The following section provides a quick reference for Administrators.

To start the Metadata Server

The Metadata Server must be running before clients can access the repository. The Metadata Server installed during the DevEnterprise installation runs automatically as a service and accesses the database configured during installation. No further action is required unless you want to change the download directory or access additional databases.

Note

For detailed information on this subject, refer to the Metadata Server topic in help.

To log in and connect to a repository

In the Metadata Analyzer Login dialog box, the host (machine) name and port number of the Metadata Server are automatically populated. Enter your administrator password. The default password is admin. You should immediately change this password to prevent admin-level access by Guests.

Note

For detailed information on this subject, refer to the Logging in and connecting to a Metadata Server topic in help.

To create a collection

Collections tell the Metadata Analyzer what information to collect and how the collection should operate when run.

Use the Collection Setup Worksheet to help you gather the information needed to set up the collection.

  1. Select File>New>Collection. The Collection dialog box appears.
  2. In the Name field, give the new collection entity a name.
  3. In the Type field, select from the drop-down list the collection type that applies to the type of metadata you want to collect.
  4. In the Connection field, select an existing connection name or set up a new connection by selecting New in the drop-down list. If you select New, a dialog will open so you can set up a new connection.
  5. Complete the Collection Properties box by referring to the field descriptions for the type of collection you are creating. The collection properties reflect the collection type you chose. Some properties apply to what you want to collect. Others have to do with how you want the collection to operate when run.
    • Assembler
    • CICS
    • COBOL
    • DB2
    • DDIO
    • IMS
    • JCL
    • PL/I
  6. To type or search for mainframe datasets to populate a dataset field (such as the Source Libraries field for a COBOL collection), click anywhere in that property's row. The field expands.
  7. Do either of the following:
    • Type the dataset name.
    • Click Search to open the Search Datasets dialog box to search for datasets.
  8. If you chose to search for datasets, do the following:

    Note

    A high-level qualifier is required in the Filter field. Following that, the wildcard character ( * ) can be used.

    • In the Search Datasets dialog box, enter a partial dataset name in the Filter field and click the search button. The Found datasets box populates with datasets that match your filter.
  9. Select the datasets you want added to the Collections dialog box, then click the Add button to add them to the Selected datasets box.

    Note

    The Add All button adds all datasets to the Selected datasets box. The Delete button removes the selected dataset, and the Delete All button removes all datasets. The Up button moves the selected dataset higher in the list, and the Down button moves it lower. When the collection is run, libraries are searched in the order in which they appear in the Collections dialog box.

  10. Click OK. The datasets are added to the Collections dialog box.
  11. Complete the User Label field if you want to flag all entities collected by this collection. The Properties view for each entity collected by this collection will show this user label.
  12. When you have completed this dialog box, click OK to save your collection. If you chose an existing connection, you may be asked to provide your connection password.

Note

During a collection, the date/time stamp for every member (program, include, job, proc) is compared to determine whether updates are needed. This way, only changed members are updated, which saves collection time and keeps your information current. When a member does not have a date/time stamp, we cannot be certain that the member was not changed, so we must recollect it to make sure that your information is current. If the member is an include or proc, this also forces the recollection of the associated programs and jobs. To avoid lengthening collection time by unnecessarily recollecting members that have not changed, be sure that all members in JCL, proc, source, and include libraries that are part of a collection have a date/time stamp. You can use the Reset ISPF Statistics panel (3.5) to turn statistics on for an entire dataset. If this is not possible, you can instead bypass a dataset for date/time stamp checking.

Note

For detailed information on this subject, refer to the Collections topic in help.

To create a connection

The Metadata Analyzer collectors use different connection types to connect to different types of hosts. It is easiest to create a new connection while you are creating a new collection; to do so, follow the procedure above for creating a new collection. To create a connection separately, follow the instructions below.

Select File>New>Connection, then complete the Connection dialog box for the type of host you want to connect to.


Note

For detailed information on this subject, refer to the Connections topic in help.

To run a collection

Because metadata may change, you should run collections on a regular basis so any changes are reflected in the repository.

  1. Do one of the following to run a collection:
    • From the Entities view, select the collection. Then, do one of the following:
      • From the Tools menu, select Run Collection.
      • Right-click the entity and select Run Collection.
    • At a Command prompt, type the following, then press Enter:

      RunCollector <Metadata Server host name> <Metadata Server port number> <collection name>

This can be done manually or by using a scheduler program to start the collection.
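For example, the following sketch runs a collection from a Command prompt and then registers the same command with the Windows Task Scheduler so it runs nightly. The host name (mdserver01), port (40000), and collection name (DailyCOBOL) are placeholders, and RunCollector is assumed to be available on the PATH of the machine that runs the collection.

  RunCollector mdserver01 40000 DailyCOBOL

  rem Schedule the same collection to run every night at 2:00 AM
  schtasks /create /tn "DailyCOBOL collection" /tr "RunCollector mdserver01 40000 DailyCOBOL" /sc daily /st 02:00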

Before running a source, DDIO, CICS, IMS, or JCL collection, the Metadata Analyzer checks whether the source files exist. If any of the specified datasets are missing, the collection is stopped and a FATAL error message is added to the Collector log. Dataset validation does not apply to DB2 collections.

After the collection has run, check whether it ran successfully. The Collector run log is written to the console and to a text file located in a Logs directory in the User Data location specified during the installation (which is available in the About dialog box) on the machine on which the collection was run. You can also check whether the collection ran successfully by viewing its status: if a collection fails, the collection's Properties view shows a Status property of Failed. Any entities collected before the failure are added to the repository; however, to collect the entities that were not yet collected, the collection must be rerun and the source downloaded again (if that option was selected). First take the steps necessary to resolve the failure, then rerun the collection.
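To spot problems quickly from a Command prompt, you can filter the Collector run log for FATAL and ERROR entries. This is a sketch only; substitute the Logs directory under your own User Data location and the actual collection name.

  findstr /i "FATAL ERROR" "<User Data location>\Logs\<collection name>.log"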

Note

For detailed information on this subject, refer to the Running a collection topic in help.

To view the information logs

Logs show the activity of the Metadata Server, collections, Metadata Analyzer, and Learner. The primary focus of these logs is to help you diagnose problems. The focus of the Learner log is to determine whether programs have been successfully learned.

The Server and Collector run logs are written to the console as well as to a text file that can be opened with a text editor. All other logs are written only to a text file. The logs show the following types of information:

  • What you have requested of the Metadata Analyzer
  • What the Metadata Analyzer was able to accomplish
  • Error information to help you correct errors

Each log message contains the following information:

  • Date/time of run
  • Logging level
  • Collection name or server machine name and port number (not applicable to the learner log)
  • Message, which may suggest actions for you to take to resolve an issue

Metadata Server logs

  • MetadataServer_portxxxx.log is a run log for the last run of the Metadata Server that is written to the console and to a text file located in a Logs directory in the UserDataPath specified during the installation (which is available in the About dialog box).
  • MetadataServer_rolling.log is a cumulative log for the month that is written to a text file only.

Collector logs

  • collectionname.log is a run log for the last run of the collector that is written to the console and to a text file located in a Logs directory in the UserDataPath specified during the installation (which is available in the About dialog box) on the machine on which the collection was run.
  • Collectors_rolling.log is a cumulative log for the month that is written to a text file only.
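Because each Collector log message includes the collection name, you can pull one collection's entries out of the cumulative monthly log from a Command prompt. The path shown is illustrative; use the Logs directory under your own UserDataPath.

  findstr /i /c:"<collection name>" "<UserDataPath>\Logs\Collectors_rolling.log"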

Learner log

The text file for this log is located in a Logs directory in the UserDataPath specified during the installation (which is available in the About dialog box) on the machine on which the Metadata Server is running.

Learner_portxxxx.log is a cumulative log that contains information about the state of the learning process. It gives the following status information for every 10 programs learned:

  • Number of programs learned
  • Number of programs still in the queue to be learned
  • Average time to learn a program

When applicable, the learner log also indicates when learning has been suspended and the reason.

Metadata Analyzer log

The text file for this log is located in a Logs directory in the UserDataPath specified during the installation (which is available in the About dialog box) on the machine on which the Metadata Analyzer was run.

  • MetadataViewer.log is a run log that contains information about the activity of the Metadata Analyzer.

Logging levels

  • FATAL: Indicates that DevEnterprise was unable to operate, typically due to an environmental constraint or issue, or a missing source file. Processing stops.
  • ERROR: Indicates that the Metadata Analyzer was unable to accomplish the task you have requested, possibly due to incorrect or insufficient user-provided information. Processing may stop or the action in error may be retried.
  • WARN: Indicates a partially successful completion of what you requested. Processing continues and you are notified of potentially harmful situations.
  • INFO: Indicates a sharing of information between DevEnterprise and the user. Lists information about the progress of the run, such as when the process started and completed, the completion code, the number of items processed, and the time it took to process.
  • DEBUG: The most comprehensive log. Gives detailed tracing information to help you debug the run.
  • OFF: Disables logging.

Note

For detailed information on this subject, refer to the Information logs topic in help.

To determine whether collected programs have been learned

Unless COBOL and PL/I programs have been successfully learned, the repository will not contain their data items and virtual columns. Thus you will get incomplete information from searching the repository, performing impact analysis, building a structure chart, and using other Metadata Analyzer functionality. To check the learn status of a program, do one of the following:

  • Use the Search view to locate the program, select it in the Entities view, then view its Learn Status in the Properties view.
  • From the Favorites menu, select Program Learn Status – Pending or Program Learn Status – Failed.
  • Use the appropriate learn status search syntax to find programs with the learn status you are interested in.
  • View the Learner log.

If a program has a Learn status of Failed, the conditions leading to the failure must be resolved so the program will be learned when the collection is rerun.

Note

For detailed information on this subject, refer to the Learning Process topic in help.

To merge entities

Program entities can be collected by more than one collector type. Each collector type collects metadata specific to that collector type. To provide complete and correct relationship, charting, and impact analysis information, you need to merge program entities that were collected by different collector types but are logically the same program. DevEnterprise provides both an automated and a manual process to accomplish this task. The automated process is described below.


  1. From the Tools menu, select Merge Rule Manager. The Merge Rule Manager dialog box appears and displays information about existing merge rules.
  2. To search the database for merge rule candidates, click Suggest. The New Merge Rule Suggestions dialog box appears. Complete the fields as described below.
  3. The Programs Matched column shows the number of programs found to have the same source library/load library combination. The Source Library and Load Library columns show the source library and load library that the programs have in common. Click any column header to sort the table by that column.
    Merge rules are processed automatically when they are created or modified, or after a collection has been run. After the merge rule is processed, a new, merged entity is created that has the combined properties and relationships of the two entities that were merged. As new entities are added to the repository, they are compared to these rules and are automatically merged when applicable. The original entities are deleted from the repository.
  4. To check whether a merge rule has completed processing, from the Favorites menu, select Sample – Merge Rules Completed. (You may need to select Favorites>More to find the favorite.) The Entities view populates with information about merge rules that have a status of Ended, which means that the merge rule has finished processing.
  5. If the merge rule does not appear in the Entities view, from the Favorites menu, select Sample – Merge Rules Not Completed. The Entities view populates with information about merge rules that have not finished processing, and will show one of the following statuses:
    • Enabled – the merge rule has been created or modified and has not yet been queued for processing
    • Pending – the merge rule is waiting to be processed
    • In Progress – the merge rule is currently being processed
    • Disabled – you have selected the Disable Merge Rule checkbox on the Merge Rule dialog and the merge rule will not be processed

Note

For more detailed information on this subject and for information on manually merging entities, refer to the Merging entities topic in help.

Create a Repository Analysis Report

The Repository Analysis Report allows an admin to get a quick assessment of the Metadata Repository to identify any problems, such as missing copybooks, libraries without timestamps, or the need to create additional merge rules.

  1. Run the runRepositoryAnalysis.bat file (located at C:\Program Files\Compuware\DevEnterprise by default).
  2. View the report, which is named RepositoryAnalysis_yyyymmdd_hhmmss.txt (located in the Reports directory in the User Data location specified during the installation).
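A minimal command-line sketch of these two steps follows. The install path is the default named above; the Reports directory under your User Data location may differ on your system.

  cd /d "C:\Program Files\Compuware\DevEnterprise"
  runRepositoryAnalysis.bat
  rem List the reports newest first to find the latest one
  dir /b /o-d "<User Data location>\Reports\RepositoryAnalysis_*.txt"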

Handling dynamic program names

You can resolve a dynamic program name to the correct program using the methods described below. For example, if program calls go to the variable name "WS-DYNPGM", you can resolve WS-DYNPGM to a static entity.

Administrators can take several actions regarding dynamic program names:

  • Let the learning process try to resolve dynamically called subprograms. When the learning process runs, it attempts to resolve dynamically called subprograms. If it can determine which value is moved into the dynamic program name, it creates a program entity and a uses relationship from the calling program to the called program. If a dynamic program name cannot be resolved, a warning message to that effect appears in the Learner log.
  • Permanently convert the dynamic program name to a static name. Use this method if you believe that the learning process may not be able to resolve the dynamic entity, for example, if the program name is kept outside of the program, in the calling program or in a database.

To convert dynamic program names

Use the mask method with COBOL or PL/I collections to convert all dynamic program names that use a standard prefix or suffix (or both) to a static program name during collection.

  • Set Convert Dynamic to Yes.
  • Enter the mask in the Convert Dynamic Mask field. Enter each mask on a separate line.

Refer to the following list for the mask format to use.

  • Delete a prefix from the dynamic program name (such as deleting WS- from WS-CWXTDATE): PREFIX*,* (for example, WS-*,*)
  • Delete a suffix from the dynamic program name (such as deleting -END from CWXTDATE-END): *SUFFIX,* (for example, *-END,*)
  • Delete both a prefix and a suffix from the dynamic program name (such as deleting WS- and -END from WS-CWXTDATE-END): PREFIX*SUFFIX,* (for example, WS-*-END,*)
  • Add a prefix to the dynamic program name (such as adding CW to XTDATE): *,PREFIX* (for example, *,CW*)
  • Add a suffix to the dynamic program name (such as adding DATE to CWXT): *,*SUFFIX (for example, *,*DATE)
  • Add both a prefix and a suffix to the dynamic program name (such as adding CW and DATE to XT): *,PREFIX*SUFFIX (for example, *,CW*DATE)

where PREFIX is the standard prefix you want to delete from or add to the program name (for example, "WS-" in "WS-CWXTDATE")
where * is the actual program name (for example, CWXTDATE)
where SUFFIX is the standard suffix you want to delete from or add to the program name (for example, "-END" in "CWXTDATE-END")

The information before the comma (,) is the dynamic program name (for example, WS-CWXTDATE). The information after the comma tells what you want the dynamic name converted to (for example, CWXTDATE).

Bypassing Members During Program Collection

Use the X2JM9DSN exit to indicate copybooks/includes that should not be recollected despite the fact that they are missing a date/time stamp.

For information on installing and using X2JM9DSN, review the source in SAMPLIB.

Note

Be prudent in your choice of copybooks/includes you add to X2JM9DSN. If they change while they are listed in X2JM9DSN, the Metadata Analyzer will not recollect them.

To change the administrator password

Select Tools>Change Password.

Note

For detailed information on this subject, refer to the Changing the administrator password topic in help.

To delete entities from the repository

  1. Select the entities you want to delete in the Entities view.
  2. Do one of the following:
    • Press Delete.
    • From the Edit menu, select Delete.
    • Right-click and select Delete.

Note

For detailed information on this subject, refer to the Deleting entities from the repository topic in help.

 
