Grav is an open source flat-file CMS that we use for client documentation.

Requirements:

  1. Web Server (Apache, Nginx, LiteSpeed, Lighttpd, IIS, etc.)
  2. PHP 5.5.9 or higher

Advantages of using a Grav CMS for documentation:

  1. Search bar: lets you search the entire collection of documentation for a term instead of going through each doc file individually
  2. Not reliant on a database: a flat-file content management system stores data in folders rather than a database, which makes for fast loading
  3. Easy to maintain, since all data lives in local folders
  4. Content is written in simple Markdown syntax (see https://nogalis.com/docr/how-to-use-docr to learn more about Markdown)
  5. Many useful plugins freely available (https://www.nogalis.com/2018/07/27/grav-a-modern-flat-file-cms for more information on plugins)
  6. User authentication and custom user privileges

Documentation Sections

  1. Introduction – Screenshot of the IPA flow and a short description of what it does
  2. General Data Flow – Box diagram showing step-by-step what is happening in the flow. Also accompanied by a paragraph explanation
  3. Components Table – A table that lists all input/output files, databases, scripts, triggers, jobs, and configuration variables
  4. Dependencies – The files, triggers, jobs, or SQL/server connections required for the flow to function
  5. Scheduling and Triggers – How/when the flow gets triggered
  6. Recovery – This section provides instructions on how to manually execute the steps of the flow if it fails. Each step here represents a box in the General Data Flow box diagram.
  7. Archiving – Table showing archived files and their archived locations
  8. Notifications – Details of the notifications sent by the flow
  9. Troubleshooting – General tips for troubleshooting
  10. Maintenance – Maintenance needed by the flow (usually involves cleanup of archived/done files)
  11. Security – Overview of user permissions and connection credentials needed for the flow
  12. Key Contacts – The person to contact for questions about the flow/documentation
  13. Related Files – File uploads related to the flow (these can be viewed and downloaded from the page)

For full examples, please visit https://nogalis.com/docr/demo_documentation

Nogalis offers monthly server healthchecks that inform clients of any current or potential problems in Lawson on both PROD and TEST servers. The healthcheck covers error logs, smoke tests, patches, LID functionality, portal checks, database integrity, and many other sections (26 total), which are briefly outlined here.

You can visit https://nogalis.com/docr/demo_hc to view a demo healthcheck for yourself.

  1. Summary

    This is arguably the most important section of the monthly healthcheck. If there is one section to read, it should probably be this one as it brings to attention the most pressing needs of all the other sections. There is also a handy letter grade assigned each month so you can quickly keep track of the overall health of your server!
  2. Recommendations

    This section has all the combined recommendations from the other sections of the healthcheck. It is divided into three levels of urgency so the client can decide what to focus on.

  3. System Layout

    System hardware and OS information.

  4. Component Versions

    Lawson component version information. If server versions start to fall significantly behind the latest versions, recommendations are made.

  5. Application Versions

    Lawson application version information.

  6. Programs with Errors

    Any .err files found in the Lawson directory are pointed out in this section. As with any section, further investigation into any issue can be requested by sending an email to the Key Contact.

  7. Custom Programs

    List of custom Z/Y/X programs.

  8. CPU Performance

    Tested while idle and under load, to give a very rough idea of the CPU performance.

  9. Disks Report

    Report on the free space available of the different drives on the server. We make sure to point out in the Summary and Recommendations sections when a drive is getting dangerously low on free space.

  10. Purge Recommendations

    The disks report is followed by purge recommendations where we point out certain things that could be deleted to free up some space.

  11. Java

    Java settings information/recommendations and a screenshot of the jconsole.

  12. licsta

    Summary table of current licenses (viewable in LID).

  13. Error Logs

    Every month we grab six important logs and analyze them for current errors. Any errors are pointed out in this section so that the client can determine whether they are worth investigating.

  14. Smoke Tests and Component Testing

    Smoke tests with LID, the Lawson portal, and various URL tests.

  15. Recurring Job Listing

    Simple table listing of recurring jobs. One of the sneaky benefits of having all this information on one healthcheck page is that it allows a quick search for any term using the DOCR search bar in the left panel.

  16. Waiting Jobs

    List of waiting jobs.

  17. Database/Table Review and Sizing

    This section displays the 10 tables with the largest number of rows for each of PROD/LOGAN/GEN. A quick review of this section might, for example, lead to a decision to purge certain tables.

  18. Database Integrity Check

    Summary of database integrity check performed through LID. Any errors here are pointed out in Summary/Recommendations.

  19. Printers

    List of printers and their commands.

  20. Work Directory Review

    Overview of the Lawson work folder.

  21. Print Directory Review

    The Lawson print directory, listed by user. This information can be used to purge old users/records.

  22. Security Analysis

    Security analysis performed using LSFIQ
    (1-click Lawson security audit and reporting tool: see https://www.nogalis.com/nogalis-products/lawson-security-with-lsfiq/ for more info.)

  23. Patches Installed Report

    List of the latest patches installed on this server.

  24. Source Versions Report

    The source versions report is in a zipped file available to download in the Related Files section.

  25. Related Files

    Any files related to the healthcheck are in this section, where they are available to view or download.

  26. Key Contacts

    If you have any questions about the healthchecks or would like to request an investigation into some error discovered by the healthcheck, please contact anyone in the Key Contacts section with your questions.

Often our Lawson print queues get cluttered and out of hand.  Lawson's deljobhst command is a really great tool for cleaning up your batch jobs.  It can clear the clutter from your users' print managers, as well as free up some space on your server.  Run this command in LID.

For each of these commands, you must provide a “ToDate” in MMDDYY format.  So, if you give it an end date of 033119, for instance, you would delete all the selected job history up to March 31, 2019.

You also have the option of providing a user's account so that the delete is performed only for that specific user.  There is also a from-date option that allows you to manage job history for a specific date range.

We recommend setting up some of these commands on a schedule to keep your Lawson server happy & healthy.

Here is a summary of the command's options:

The -w option will delete all waiting jobs, that is, jobs in recovery and jobs with Invalid Parameters.  After you run this, there will not be any jobs listed in the waiting queue for the specified user (or all users) up to the specified run date.

The -c option deletes all completed jobs.  This is a great way to clean up users' Print Manager lists.  This action removes the data from the QUEUEDJOB table.  It does not remove print files.

The -r option removes all the print files associated with batch jobs that were created up to the specified ToDate.  This will help keep your server from getting too cluttered.  Make sure you back up your print directory, especially if you have a retention policy at your organization.  If you run the command so that it deletes ALL print files (that is, everything up to today), it will delete your entire print directory.  Don't panic!  It will be recreated the next time a user runs a batch job.
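
Assuming a ToDate of 033119, the three variations look something like this.  This is only a sketch: run deljobhst with no arguments in LID to confirm the exact argument order, and the switches for the user and from-date options, in your environment.

  deljobhst -w 033119    (clear waiting jobs through 03/31/19)
  deljobhst -c 033119    (clear completed job records from QUEUEDJOB through 03/31/19)
  deljobhst -r 033119    (remove print files for batch jobs created through 03/31/19)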

The JSON Converter node can be used to build a JSON object from CSV or XML, or to convert a JSON object to XML or CSV.

Under the Input tab of the node's Properties, the input can be the output of another node, a variable, or a text string.

The output of the converter node can be saved to a file, used in a data iterator, or consumed by other reader nodes in your flow.
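
As a rough illustration (the exact output shape depends on the node's settings), a CSV input like this:

  id,name
  1,Jane
  2,John

might come out of the converter as a JSON object along these lines:

  [
    { "id": "1", "name": "Jane" },
    { "id": "2", "name": "John" }
  ]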

I was doing some work in Landmark and ran across an issue where my LPA node wouldn’t start in the grid.  I looked at the logs and saw the error “There is not one Actor for this identity: <IDENTITY>”.  Issues like this often present themselves when trying to log into Rich Client also.  In this case, it came back with a “logon failure” message.  When you come across issues like this, and you can’t get into Rich Client, the next step is to check the database.  For an error referring to actors and identities, the first table to look at would be IDENTITYACTOR in the Landmark GEN database.  In this case, I discovered that the record for the IDENTITY mentioned in the error had a DELETEFLAG that was populated with the UNIQUEID (meaning that it had been deleted).  I updated the record and set the DELETEFLAG = 0, rebooted my server, and the LPA node started right up.
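
For reference, the fix amounted to SQL along these lines against the Landmark GEN database.  The WHERE column here is an assumption; check the actual key columns of IDENTITYACTOR in your schema before updating anything.

  -- check whether the identity's record has been soft-deleted
  SELECT DELETEFLAG FROM IDENTITYACTOR WHERE IDENTITY = '<IDENTITY>';

  -- clear the delete flag so the record is active again
  UPDATE IDENTITYACTOR SET DELETEFLAG = 0 WHERE IDENTITY = '<IDENTITY>';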

The JSON Builder node can be used to build a JSON object, which you can use later in your flow for reading or sending out to a server using a web call.

Under the Input tab of the node's Properties, the input can be the output of another node, a variable, or a text string.

The output of a JSON Builder can be used to send a JSON web call, or it can be read in much the same way as JSON Parser output.
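
For example, the builder might assemble values from earlier in the flow into a payload like this before a web call (the field names here are made up):

  {
    "employee": "1001",
    "action": "update",
    "status": "ACTIVE"
  }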

The JSON Parser node can be used to parse JSON data, either from a local file or from a response from a Web API.  The steps are very similar to getting XML data from a web API.

Under the Input tab of the node's Properties, the input can be the output of another node, such as a File Access node or a Web Run result.

For the output, providing a sample file with a JSON response is an easy way to get the syntax for the variables coming across in the JSON response.  You can click "Set Variable" to see the syntax, and you can select "Export Variables" to get a file with the syntax for all variables in the sample file.

Use this output syntax to set variable values to use later in your flow.
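
For instance, given a hypothetical sample response file like this:

  {
    "data": {
      "records": [
        { "id": "100", "status": "OK" }
      ]
    }
  }

the Export Variables file will give you the exact output syntax for each nested field (id, status, and so on), which you can then paste into variable assignments in your flow.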

What happened to User Fields in CSF?

Some of them are still there in the Business Class (a file with functions built into it), but none of them are on the forms.

Some forms do have tabs for user fields, but there are no fields on those tabs; the fields need to be added using Configuration Console.

Attributes, which have always been user fields on steroids, also go away in CSF.  Using Configuration Console, any kind of custom field can be added.

In conclusion, user fields are not gone; you can still have as many as you would like.  They just need to be added to the forms, and sometimes to the business class, using Configuration Console.  Keep in mind that the system updates every third Saturday, so it is best to add only those you really need.

Many times, you will have a need to run environment utilities (such as importdb) using a system command node, or a batch job such as IMDBB.  If you are getting security violations when you attempt to use these tools, you will need to elevate the Lawson Security privileges of the user running IPA.  The reason is that the system user running the IPA service is the one actually running those system commands.  Windows took away the ability to do a "run as", so there is no way to bypass that user.

If you don’t know which user is running IPA, you can find out by executing a “whoami” command in a system command node on your IPA server.
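
The output is a single line in domain\user format; for example (the account name here is made up):

  whoami
  corpdomain\svc-ipa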

Next, you need to find this user in Lawson Security so you can elevate its privileges.  Open up Lawson Security and go to Manage Identities.  Search for the service account that you discovered with your "whoami" command.

Make note of the RMID and use that to search for the user under User Management.  Set that user's "CheckLS" to "YES" and assign the roles required to allow access to the necessary environment utilities.

Getting the Oracle ODBC driver installed on a Windows machine is ridiculously difficult and poorly documented. It's certainly not a simple task. Since I figured it out today, I thought I would document it.

First you need to download all three of these files from Oracle:
1) instantclient-basic-win-x86-64-11.1.0.7.0
2) instantclient-odbc-win-x86-64-11.1.0.7.0
3) instantclient-sdk-win-x86-64-11.1.0.7.0
Extract the contents of all three archives into a single directory called C:\Oracle\Ora11\
That directory doesn't exist yet, so you'll have to create it first.
Now, with all the files copied in there, create the following Windows environment variables:
1) OCI_LIB64 = C:\Oracle\Ora11
2) ORACLE_HOME = C:\Oracle\Ora11
3) TNS_ADMIN = C:\Oracle\Ora11
Also, add “C:\Oracle\Ora11” to your PATH variable.
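One way to create the three variables from an elevated command prompt is with setx (the /M switch makes them system-wide); the PATH change is safest to make through System Properties > Environment Variables:
  setx OCI_LIB64 "C:\Oracle\Ora11" /M
  setx ORACLE_HOME "C:\Oracle\Ora11" /M
  setx TNS_ADMIN "C:\Oracle\Ora11" /M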
At this point you MUST reboot. There is no way around it; simply restarting the Explorer task won't do.
After the reboot, create the following directory:  C:\Oracle\Ora11\Network\Admin
Place your tnsnames.ora file inside this new directory.
Open a cmd prompt as administrator.
cd to C:\Oracle\Ora11
At the command prompt, type the following command:
odbc_install.exe
You should receive a message that says: “Oracle ODBC Driver is installed successfully.”
The new ODBC driver appears only in the 64-bit ODBC Data Source Administrator. You can now add a new data source there. Just remember that your tnsnames.ora file must contain the service you're trying to connect to.
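
For reference, a minimal tnsnames.ora entry looks something like this (the alias, host, port, and service name are placeholders; use the values for your database):

  MYDB =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
      (CONNECT_DATA =
        (SERVICE_NAME = MYDB)
      )
    )

Whatever alias you define (MYDB above) is what you select as the TNS service name when creating the ODBC data source.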