
Setting session timeout for ADFS and Lawson

It is recommended that the session timeouts for AD FS and Lawson be synchronized. You can modify the session timeout for Lawson in ssoconfig option 1. To modify the session timeout for AD FS, set the TokenLifetime for your relying party trust using the command below.
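For example, to check the current value and then set a 60-minute token lifetime (matching a 60-minute Lawson session timeout), run the following from PowerShell on the AD FS server. The trust name “Lawson Portal” is just a placeholder; use the name reported by Get-AdfsRelyingPartyTrust.

Get-AdfsRelyingPartyTrust | Select-Object Name, TokenLifetime

Set-AdfsRelyingPartyTrust -TargetName "Lawson Portal" -TokenLifetime 60

TokenLifetime is specified in minutes; a value of 0 means AD FS uses its default lifetime.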

Lawson Portal – Getting Started as a New User

When you first use Lawson Portal, all the buttons and unfamiliar UI can be a bit overwhelming. The first change I always make is setting the buttons to display text instead of icons.

 

First go to Preferences >> User Options

 

Under “Toolbar Button Display” select Text >> Apply and OK

Once you’ve applied the changes, your icons will change from this:

To this:

This is especially helpful for users with higher-tier access, as you don’t want to accidentally click delete or change a form when you didn’t mean to!

 

Hope this simple tip was helpful!

Lawson Portal Helpful User Guide

Over the years I’ve helped users debug issues in Lawson Portal. In that time, I’ve noticed a pattern of users not utilizing fundamental functions of Portal. I’ll share a few of them with you today.

 

New Tab Button:

Instead of opening more browser windows and using up more of your PC’s memory, you can use Lawson’s own tab system, which users often miss. It lets you switch between multiple forms more quickly and easily, as shown below:

 

Search box keywords:

You might be used to typing in form numbers, as shown below, but did you know you can type in keywords as well?

Don’t know the form name for a General Ledger task? Type in the words General Ledger and click Search:

This is great for newer users, and for veteran users taking on new tasks.

 

Form Help:

Finally, if you want more information about the form you’re currently using, go to the form and click Help >> Form Help.

 

Hope these tips help you on your journey to mastering Lawson Portal.

How to set ESS access from an external domain and for custom forms

Step 1: Log in to Infor Security Services (ISS)

Step 2: Go to SSO >> Manage Domains and click the edit button as shown:

We are simply verifying the names of the displayed XML files:

Step 3: On the LSF server, go to the %LAWDIR%\security\domainauth\EXTERNAL directory to edit that XML file

 

Step 4: Add an entry to an existing tree or create a new one:
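Since the screenshot is not reproduced here, the sketch below only illustrates the general shape of such an entry. The element and attribute names are hypothetical placeholders, not the actual domainauth schema; mirror an existing entry in your own XML file and adjust the form path for your custom form:

<!-- Hypothetical illustration only; copy the structure of an existing entry in your file -->
<TREE name="EXTERNAL">
  <ENTRY path="/lawson-portal/custom/myform.htm" />
</TREE>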

 

 

That’s all there is to it.

Compiling an entire product line in Lawson Interface Desktop and speeding up the process

Often when you install a patch into Lawson, you want to ensure all forms run the next day without issues. This is especially important with larger patches that affect dozens of Lawson forms, or when a new database dictionary is created.

 

This is easily done through the cobcmp command.

 

When you run the cobcmp command, it works through a list of all Lawson forms and compiles each one.

 

Once it is running, you can check its progress by using this command: qstatus | head -5

This command shows you the number of forms left to compile.

 

If you want to speed up this process, you can increase the number of compile jobs running.

In LID this is done with this command: qcontrol -jlocal,4

 

This increases your job count to 4 (the default is 2).

If your server only has 2 cores, it is recommended to keep it at the default of 2.

 

I’d only increase the compile job count to one less than the number of cores your server has, so you don’t starve other processes.

 

Once compiling is done, it is recommended to change the job count back to 2: qcontrol -jlocal,2
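Putting the whole sequence together, a typical LID session for a large patch (on a server with more than 4 cores) might look like this:

qcontrol -jlocal,4
cobcmp
qstatus | head -5
qcontrol -jlocal,2

Raise the job count first, kick off the compile, repeat qstatus to watch the remaining count, then reset the job count when it reaches zero.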

 

Hope this was of help!

Getting the version of important Lawson interfaces

Interface versions are important when debugging an issue. They let you check whether a patch exists that addresses the issue, and give Infor a starting point if they are assisting you with the problem.

 

Some of the most common ones needed are:

  • Lawson System Foundation (ILSF)
  • Security Jar files (secLS)
  • Infor Landmark (ILMRK)
  • Distributed Security Package (DSP)

 

It’s quite simple as long as you have access to LID, Infor Security Services (cloud), and the Landmark Command Line (cloud).

 

Starting with LID:

  • For the Lawson System Foundation (ILSF) version, run: univver lawsec.jar
  • For the Security Jar files (secLS) version, change directory to %GENDIR%/java/thirdparty and run: univver secLS.jar
  • For the Distributed Security Package (DSP) version, change directory to %LAWDIR%/lss/system
    • Type: lashow install.log
    • This displays the contents of the file; you can copy the version or export it as needed.
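As a quick copy/paste reference, the LID commands above in order:

univver lawsec.jar
cd %GENDIR%/java/thirdparty
univver secLS.jar
cd %LAWDIR%/lss/system
lashow install.log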

Infor Security Services (ISS):

  • Click the lowercase “i” in the upper right-hand corner of the page

Landmark Command Line (ILMRK):

  • Simply type univver -V

 

That’s all there is to it!

 

Using jobdef to distribute job files in Lawson

In LID, type jobdef, select a user and job name, then press F6 >> B. Report Distribution

As you can see, this job isn’t being distributed beyond the prt file that is produced in the user’s print manager:

You have three options: None, direct to a printer, or a distribution group:

A distribution group consists of a list of users, each of whom can have a different printer set up within the group:

The printer itself has to be defined in printer definitions (prtdef), as well as set up in the distribution list group definitions (dstlistgrpdef).

 

In this example, if the job definition (jobdef) were set to the BENEFIT distribution group and run, it would automatically print to PRINTER123 under the lawson user. The other users have no printer set, so nothing would print for them. That’s all there is to it!

 

How to Resolve Workunits Sitting in Ready Status

Problem: Workunits for Infor Process Automation are sitting in Ready Status and not processing.

Solution:
  1. Via LmrkGrid (Grid Management), determine how many workunits may simultaneously process in your system.
  2. Determine how many workunits are currently in a processing state.

NOTE: Please refer to the attachment “DeterminingSimultaneous.docx” for instructions on determining simultaneous processes. If you need to engage support, screenshot this information so you can provide it when you open the support incident.

 

– Command to count records in Ready Status: dbcount <DATAAREA> -f Status="1" PfiWorkunit

– Command to count records in Processing Status: dbcount <DATAAREA> -f Status="2" PfiWorkunit

– Command to count records in Completed Status: dbcount <DATAAREA> -f Status="4" PfiWorkunit

 

It is a good idea to monitor and take counts of these records periodically. Is the number of workunits in Ready status growing? Is the number of workunits in Completed status growing? Is the number of workunits in Processing status equal to the maximum number of workunits that can simultaneously process?
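If you want to automate those periodic counts, a small perl script in the spirit of the samples later in this post can log them on an interval. This is only a minimal sketch: it assumes dbcount is on your PATH and uses PROD as the data area, so adjust both for your environment.

#!/usr/bin/perl
# monitor_workunits.pl - minimal sketch, not an Infor-supplied script.
# Assumes dbcount is on PATH; replace PROD with your data area.
use strict;
use warnings;

while (1) {
    my $ts = localtime();
    # Status 1 = Ready, 2 = Processing, 4 = Completed
    for my $status (1, 2, 4) {
        my $count = `dbcount PROD -f Status="$status" PfiWorkunit`;
        chomp $count;
        print "$ts  Status=$status  $count\n";
    }
    sleep 300;    # wait 5 minutes between samples
}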

NOTE: If the number of workunits in Ready Status is growing and the number of workunits in Completed status is not, then either:

  1. You have workunits that have been processing for a very long time and are holding up the system. Use the Grid Management UI to determine which workunits are processing for so long, and whether they are stuck in a loop or just processing normally large jobs. Consider cancelling the long-running workunits and scheduling them to run outside of business hours.
  2. If you are on Landmark 10.1.0.x, there was a bug in this version of Landmark that periodically caused Async to stop picking up new workunits. This issue was resolved by a rewrite of the Async and LPA nodes in the 10.1.1.x Landmark versions. If you are on Landmark 10.1.0.x, you should restart the Async node and the IPA node.

NOTE: Workunits that were already queued to an LPA node will not automatically start back up; the workunit polling frequency (default 30 minutes) will need to trigger before they are requeued to a new LPA node.

Troubleshooting: Rich Client workunit log too large to extract

Depending on the process run, Rich Client workunit logs can grow extremely large, so large in fact that you may not be able to extract the full log from the Landmark Rich Client.

If a workunit log grows too large to extract that way, you can use the following command to extract it from a Landmark command prompt.

What Is the Landmark Command Prompt?
The Landmark administrator will perform many tasks from a command prompt. When you are instructed to use a Landmark command prompt, you should be sure you are in the Landmark Environment that you want to use, and that all environment variables are set correctly.

Setting Environment Variables
Before you perform this procedure, be sure that the /etc/lawson/environment/environmentName/config.sh file contains the appropriate settings for environment variables.

 

Use this procedure to export the appropriate environment variables for your Landmark Environment before issuing commands from a Landmark command prompt.

To set the Landmark Environment variables

At a command prompt, type

. cv landmark-env-name

Where landmark-env-name is the name of the Landmark Environment.
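For example, if your Landmark Environment is named prod:

. cv prod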

Resolution:

dbexport -C -f "PfiWorkunit=####" -o . -n <dataarea> PfiWorkunit

Example:

dbexport -C -f "PfiWorkunit=7000" -o . -n prod PfiWorkunit

IPA Sample Flows and Scripts

Infor KB article 1667935 offers a list of sample flows and scripts used to carry out certain tasks in Lawson.

Download Sample Flows and Scripts

Preamble

During the course of providing support for Infor Process Automation, I periodically find the need to create a process flow or short script to satisfy some need that I have. This KB article is a place you can look for some example flows and scripts. The content here is built as examples of techniques and is not intended to be executed in a Production environment without modification.

FP.pl – (perl script)

This perl script accepts a file as input, such as a workunit.log that is too large to open in a text editor, and breaks it into smaller pieces. Execute from a command window:

C:\>perl FP.pl -f C:\folder\filename.ext -n 100000

 

This would create new files C:\folder\filename1.ext, C:\folder\filename2.ext, and so on, each with 100,000 lines.
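The original FP.pl is attached to the KB article; as a rough illustration of the technique, here is a minimal sketch of a splitter with the same flags (my own reconstruction, not the KB script):

#!/usr/bin/perl
# Minimal sketch of an FP.pl-style file splitter (reconstruction, not the KB script).
# Usage: perl FP.pl -f C:\folder\filename.ext -n 100000
use strict;
use warnings;
use Getopt::Std;
use File::Basename;

my %opt;
getopts('f:n:', \%opt);
die "Usage: perl FP.pl -f <file> -n <lines-per-chunk>\n"
    unless $opt{f} && $opt{n};

# Split C:\folder\filename.ext into name, directory, and extension parts
my ($name, $dir, $ext) = fileparse($opt{f}, qr/\.[^.]*$/);
open my $in, '<', $opt{f} or die "Cannot open $opt{f}: $!";

my ($part, $lines, $out) = (0, 0, undef);
while (my $line = <$in>) {
    # Start a new chunk every n lines: filename1.ext, filename2.ext, ...
    if ($lines % $opt{n} == 0) {
        close $out if $out;
        $part++;
        open $out, '>', "$dir$name$part$ext" or die "Cannot write chunk: $!";
    }
    print {$out} $line;
    $lines++;
}
close $out if $out;
close $in;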

 

LogReporter.pl – (perl script)

This perl script is used for determining where a process flow is spending all of its time processing. It is packaged in a zip file; when unzipped, LogReporter.pl should sit inside a folder containing a /lib folder, which holds a required perl module. To run the script, save off a workunit log that was captured at “Workunit Only” log level. Then, from a command window, cd to the directory where LogReporter.pl exists and execute:

perl LogReporter.pl -f C:\folder\workunit.log

 

This will create two csv files, one being a report of where the flow spent all of its time.

 

IPABP-FileAccess-IntermittentWrite.lpd

This is a sample process flow which provides three examples that achieve the same result. The first technique stores all data in a Message Builder node, then writes to a file once. This technique is only possible if the process flow does not stop at a User Action node or Wait node, which would wipe current variable values. The second example appends to the file once per record returned. I recommend avoiding this strategy, as there is overhead to opening a file, writing, and closing the file repeatedly. The third example shows how to store data for X records, then periodically write the data to a file. This is the recommended technique, as it minimizes the memory footprint and eliminates the need to write to the file for every record.

 

IPABP-CSVFile-SQLCMD.lpd

This sample flow makes use of SQL Server’s sqlcmd utility to obtain a CSV file directly from the database. A technique like this can save significant resources and time compared to building a CSV file from within a flow. The command we run is:

SQLCMD -S . -d DATABASENAME -U username -P password -s, -W -Q "SELECT PFIWORKUNIT, FLOWDEFINITION, FLOWVERSION, WORKTITLE, PFIFILTERKEY, FILTERVALUE FROM [DATABASENAME].[SCHEMA].[PFIWORKUNIT]" | findstr /V /C:"-" /B >

Microsoft sqlcmd utility documentation: https://docs.microsoft.com/en-us/sql/tools/sqlcmd-utility?view=sql-server-2017

TIP: Adding -E instead of -U and -P will cause the command to use a trusted connection and pass Windows credentials to the database. Running this command from within a flow means the command will actually execute as the user that Landmark runs as, and it will pass those credentials to the database; if you add a Windows Authentication login for that user, you won’t have to pass the user and password in plain text from a process flow.

 

TIP: The -h flag can be used to control the header row: -h -1 prints no headers, while -h 100 prints the headers every 100 rows of data.
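Combining both tips, a trusted-connection variant with no header row might look like this (the database, schema, and output path are placeholders for your own values):

SQLCMD -S . -d DATABASENAME -E -s, -W -h -1 -Q "SELECT PFIWORKUNIT, FLOWDEFINITION, FLOWVERSION, WORKTITLE, PFIFILTERKEY, FILTERVALUE FROM [DATABASENAME].[SCHEMA].[PFIWORKUNIT]" > C:\temp\workunits.csv

With -h -1 there is no header row or dashed separator line, so the findstr filter from the original command is no longer needed.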

 

IPABP-WorkunitGovernor

Periodically we see a flow that is deemed critical to the business, alongside a few less important process flows that can take a lot of time to run. In this scenario it is possible for several less important flows to be triggered and occupy all available workunit threads, causing the important flows to wait for an available thread. This is a sample of a technique you could use to limit the number of less important flows that can run at the same time.

 

The flow is a simple Landmark transaction to count how many workunits with the same name are currently processing. If that number is higher than the value we set, the flow waits for a few minutes and then gets another count. We are leveraging the Wait node, which parks the workunit and makes its workunit thread available to other flows.

 

IPABP-JobSubmit and IPABP-JobWait

NOTE: Due to a bug in the Landmark Runtime Environment, you must be above Landmark 10.1.1.10 for this flow to work.

 

Lawson System Foundation deprecated the old cgi-lawson/jobrun.exe command in LSF10 and replaced these cgi executables with lawson-ios/action servlets. I developed this example flow as a way to submit a job and have Process Automation wait for the job to finish before moving on. The benefit of this technique is that it does not consume a workunit thread while the job is executing in LSF.

 

It works like this: you have a main flow (IPABP-JobSubmit) which uses a Trigger node to launch IPABP-JobWait. When it triggers the JobWait flow it passes all the information required to execute the job; JobSubmit then immediately goes to a User Action node, which drops it from processing and returns the workunit thread. JobWait submits the job and then cycles through a Wait node, periodically checking the status of the job. When the job completes, JobWait takes an action on the JobSubmit workunit, which wakes that flow up to continue processing.

 

IPABP-Multithread-S3Query

Before we found the performance improvements of the Java interpreter available in Landmark 10.1.1.48, there were times we needed large, long-running flows to execute faster. This flow is an example where an S3 query returned a lot of records and we needed to perform another action for every single record returned. To improve the processing time we broke this up so that the S3 query was executed simultaneously in multiple workunits instead of a single workunit.

 

This sample collects a set of KEYS for the table we are querying inside LSF, then triggers a new workunit to handle each group of records.

 

IPABP-BackDatedProxyUserApproval & IPABP-ResetWorkunitsForNewProxies

In Infor Process Automation 10.1.1.x we introduced some new features, one of which is the concept of a Proxy User: it allows a Process Flow approver to create a proxy that grants another user access to fill in for them while they are away. When this feature was developed, to avoid problems with pre-existing code we could not safely grant the proxy user access to pre-existing work items.

 

Suppose a user is assigned 2 work items on Wednesday, then sets up a proxy for another user to cover for them. Even if you back-date the proxy access to Tuesday, the new proxy user will not see the items created Wednesday. So I came up with the following concept:

  1. If I design my approval flow with this limitation in mind, I can put an action on every User Action node that simply leaves the current UserAction node and comes right back to the same UserAction.
  2. I can design another flow to find all workunits that are waiting on a UserAction and have it execute this special action, causing those flows to leave the current UA and return.

Why? Because this means that after I create a proxy, I can run my Proxy Reset flow to have each flow leave and return, thus creating new Work Items which include the new proxy user.

 

The method I am describing works, but it may not be for everyone. In this instance I am stealing the Timeout action so that regular users cannot see my “Reset Action”; if you already use a timeout action, additional modification to this concept would likely be required.

 

IPABP-LargeHTMLContentInVariable

There have been several times where I have encountered a client looking to build a simple HTML content display exec that shows a header record and all of its detail lines, such as REQHEADER and REQDETAIL. However, the database limits a string variable to 2048 characters, so if the flow has multiple levels of approval, after the flow stops for the first time (and variables are saved to the database) the perfectly built HTML is truncated down to 2048 characters.

 

NOTE: If you trigger your flow from a File Channel, the Input Data variable is already populated and this approach won’t work for you. Essentially, this flow builds the HTML, then makes a Landmark transaction to create a PfiWorkunitInputData record to store the HTML, and then uses a short-term Wait node. The purpose of the Wait node is that when the workunit restarts, it loads PfiWorkunitInputData into the Input Data variable, where it can now be used anywhere else in the flow.

 

IPABP-LogLevelAdjuster (Windows Landmark)

This process flow is designed to update all Process Flow Definitions: it sets the workunit log level to “Workunit Only”, sets the CancelCheckFrequency to “Intermittent @ 30 seconds”, and disables CaptureActivityStartandEnd times. For efficiency’s sake, these are the recommended settings for running a flow inside a Production environment unless you are specifically troubleshooting an issue with the flow.

 

NOTE: This flow contains 2 variables in the start node that must be updated before implementing it. The “BaseLmrkDir” variable needs to point to the absolute path of the Landmark directory where the “enter.cmd” command resides. You must also use \\ instead of \ because we are storing the directory in a javascript string, for example D:\\lmrkprod. If the path has multiple directories, be sure to use double backslashes for all of them: D:\\Landmark\\prod.
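For example, the start node assignment might look like this (the path is just an example):

BaseLmrkDir = "D:\\Landmark\\prod";   // double backslashes inside a javascript string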

 

Also, you should modify the email address to your own email address.

 

UNIX – You should be able to modify this flow fairly easily to work for UNIX as well. You simply need to modify the System Command node so that it can run a Landmark command: cd /landmarkdirectory && enter.sh && listdataareas. Everything else is Landmark based.