Posts

How to resolve the problem: LSF submitted processes aren’t triggering on IPA Server

Example: Submitted requisitions aren’t triggering on the IPA server via the reqapproval process and are not showing in Rich Client.

 

  1. Go to the LSF application server, start LID, and type: pfserv ping all

    Make sure Event Manager is running.
  2. If Event Manager is NOT running:
    Restart it, but make sure the java processes with -classpath in their command line actually close.

Now, in LID on the LSF server, run the stop commands IN THIS ORDER:
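
The individual stop commands vary by LSF version; as a fallback sketch, assuming pfserv accepts the same all target used by ping above, everything can be stopped at once:

pfserv stop all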

Verify that the java.exe -classpath processes have closed; if they have not, end them manually.

 

Now start them up IN THIS ORDER:
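
Again, as a sketch under the same assumption:

pfserv start all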

Now run pfserv ping all again to confirm that Event Manager is running.

 

Go to Rich Client and check Live Workunits to verify that hung processes are being picked up.

 

And you’re done!  Monitor and make sure users aren’t having any issues.

 

How to Resolve Workunits Sitting in Ready Status

Problem: Workunits for Infor Process Automation are sitting in Ready Status and not processing.

Solution:
  1. Via LmrkGrid (Grid Management), determine how many workunits may simultaneously process in your system. (see attached “DeterminingSimultaneous.docx”)

  2. Determine how many workunits are currently in a processing state.

NOTE: Please refer to the attachment “DeterminingSimultaneous.docx” for instructions on determining simultaneous processes. If you need to engage Support, screenshot this information so you can provide it when you open the support incident.

 

– Command to count records in Ready Status: dbcount <DATAAREA> -f Status="1" PfiWorkunit

– Command to count records in Processing Status: dbcount <DATAAREA> -f Status="2" PfiWorkunit

– Command to count records in Completed Status: dbcount <DATAAREA> -f Status="4" PfiWorkunit

 

It is a good idea to monitor these counts periodically. Is the number of workunits in Ready status growing? Is the number of workunits in Completed status growing? Is the number of workunits in Processing status equal to the maximum number of workunits that can simultaneously process?
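
As a minimal sketch of that periodic monitoring (assumptions: perl is available, dbcount is on the PATH of a configured Landmark environment, and PROD stands in for your data area):

# countwatch.pl - hypothetical helper: log workunit status counts every 5 minutes
use strict; use warnings;
my $dataarea = 'PROD';                        # placeholder: your data area
my %status = (Ready => 1, Processing => 2, Completed => 4);
while (1) {
    for my $name (sort keys %status) {
        my $count = `dbcount $dataarea -f Status="$status{$name}" PfiWorkunit`;
        chomp $count;
        print scalar(localtime), " $name=$count\n";
    }
    sleep 300;                                # take a new set of counts every 5 minutes
}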

NOTE: If the number of workunits in Ready Status is growing and the number of workunits in Completed Status is not, then either:

  1. You have workunits that process for a very long time and hold up the system. Use the Grid Management UI to determine which workunits have been processing so long, and whether they are stuck in a loop or just processing normally large jobs. Consider cancelling the long-running workunits and scheduling them to run outside business hours.
  2. If you are on Landmark 10.1.0.x, there was a bug in this version of Landmark that periodically caused Async to stop picking up new workunits. The issue was resolved by a rewrite of the Async and LPA nodes in the 10.1.1.x Landmark versions. If you are on Landmark 10.1.0.x, you should restart the Async node and the IPA node.

NOTE: Workunits that were already queued to an LPA node will not automatically start back up; the workunit polling frequency (default 30 minutes) will need to trigger before they are requeued to a new LPA node.

IPA Sample Flows and Scripts

Infor KB article 1667935 offers a list of sample flows and scripts used to carry out certain tasks in Lawson.

Download Sample Flows Scripts

Preamble

While providing support for Infor Process Automation, I periodically need to create a process flow or short script to satisfy some need. This KB article is a place you can look for example flows and scripts. The content here is built as examples of techniques and is not intended to be executed in a Production environment without modification.

FP.pl – (perl script)

This perl script accepts a file as input, such as a workunit.log that is too large to open in a text editor, and breaks it into smaller pieces. Execute it from a command window:

C:\>perl FP.pl -f C:\folder\filename.ext -n 100000

 

This would create new files C:\folder\filename1.ext, C:\folder\filename2.ext, and so on, each with 100,000 lines.

 

LogReporter.pl – (perl script)

This perl script is used to determine where a process flow is spending its processing time. It is packaged in a zip file; after unzipping, LogReporter.pl should sit inside a folder that contains a /lib folder holding a required perl module. To run the script, save off a workunit log captured at the “Workunit Only” log level, then from a command window cd to the directory where LogReporter.pl exists and execute:

perl LogReporter.pl -f C:\folder\workunit.log

 

This will create two csv files, one of which is a report of where the flow spent its time.

 

IPABP-FileAccess-IntermittentWrite.lpd

This is a sample process flow providing three examples that achieve the same result. The first technique stores all data in a Message Builder node, then writes to a file once; this is only possible if the process flow does not stop at a User Action node or Wait node, which would wipe current variable values. The second example appends to the file once per record returned; I recommend avoiding this strategy, as there is overhead to repeatedly opening a file, writing, and closing it. The third example stores data for X records, then periodically writes the data to a file. This is the recommended technique: it caps the amount of data held in memory and eliminates the need to write to the file for every record.
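
As a rough standalone illustration of the third technique (a perl sketch, not the flow itself; the inline DATA records stand in for the query results):

use strict; use warnings;
my $BATCH = 500;                  # flush to disk every 500 records
my @buffer;
open my $out, '>>', 'output.csv' or die "open: $!";
while (my $record = <DATA>) {     # stand-in for the flow's query loop
    chomp $record;
    push @buffer, $record;
    if (@buffer >= $BATCH) {      # one write per batch, not one per record
        print {$out} "$_\n" for @buffer;
        @buffer = ();
    }
}
print {$out} "$_\n" for @buffer;  # write whatever is left over
close $out;
__DATA__
rec1,valueA
rec2,valueB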

 

IPABP-CSVFile-SQLCMD.lpd

This sample flow makes use of SQL Server’s sqlcmd command-line utility to obtain a CSV file directly from the database. A technique like this can save significant resources and time compared to building a csv file from within a flow. The command we run is:

SQLCMD -S . -d DATABASENAME -U username -P password -s, -W -Q "SELECT PFIWORKUNIT, FLOWDEFINITION, FLOWVERSION, WORKTITLE, PFIFILTERKEY, FILTERVALUE FROM [DATABASENAME].[SCHEMA].[PFIWORKUNIT]" | findstr /V /C:"-" /B >

Microsoft sqlcmd utility documentation: https://docs.microsoft.com/en-us/sql/tools/sqlcmd-utility?view=sql-server-2017

TIP: Adding -E instead of -U -P will cause the command to use a trusted connection and pass Windows credentials to the database. Running this command from within a flow means the command actually executes as the user that Landmark runs as, and it will pass those credentials to the database; if you add a Windows Authentication login for that user, you won’t have to pass the user and password in plain text from a process flow.

 

TIP: The -h flag controls the header row: -h -1 prints no headers, while -h 100 prints the headers every 100 rows of data.
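
For example, combining both tips against the same hypothetical table:

SQLCMD -S . -d DATABASENAME -E -s, -W -h -1 -Q "SELECT PFIWORKUNIT, FLOWDEFINITION FROM [DATABASENAME].[SCHEMA].[PFIWORKUNIT]"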

 

IPABP-WorkunitGovernor

Periodically we see a flow that is deemed critical to the business, alongside a few less important process flows that can take a lot of time to run. In this scenario it is possible to trigger several less important flows that occupy all available workunit threads, causing the important flows to wait for an available thread to process on. This is a sample of a technique you could use to limit the number of less important flows that can run at the same time.

 

The flow is a simple Landmark transaction that counts how many workunits with the same name are currently processing. If that number is higher than the value we set, the flow waits for a few minutes, then gets another count. We are leveraging the Wait node, which parks the workunit and makes its workunit thread available to other flows.
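
Sketched as a standalone perl loop (in the flow this is a Landmark transaction node feeding a Wait node; count_running() is a hypothetical stub):

use strict; use warnings;
my $MAX_CONCURRENT = 3;                # cap for the less important flow
while (count_running('MyLongFlow') >= $MAX_CONCURRENT) {
    sleep 300;                         # the Wait node parks the workunit here,
}                                      # freeing its thread for other flows
# ...proceed with the heavy work...
sub count_running {
    my ($flow) = @_;                   # stub: replace with a count of Processing
    return 0;                          # workunits that share this flow's name
}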

 

IPABP-JobSubmit and IPABP-JobWait

NOTE: Due to a bug in the Landmark Runtime Environment, you must be above Landmark 10.1.1.10 for this flow to work.

 

Lawson System Foundation deprecated the old cgi-lawson/jobrun.exe command in LSF10 and replaced these cgi executables with lawson-ios/action servlets. I developed this example flow as a way to submit a job and have Process Automation wait for the job to finish before moving on. The benefit of this technique is that it does not consume a workunit thread while the job is executing in LSF.

 

It works like this: a main flow (IPABP-JobSubmit) uses a Trigger node to launch IPABP-JobWait. When it triggers the JobWait flow, it passes all the information required to execute the job; JobSubmit then immediately goes to a User Action node, which drops it from processing and returns the workunit thread. JobWait submits the job, then cycles through a Wait node, periodically checking the status of the job. When the job completes, JobWait takes an action on the JobSubmit workunit, which wakes that flow up to continue processing.

 

IPABP-Multithread-S3Query

Before we found the performance improvements of the Java Interpreter available in Landmark 10.1.1.48, there were times we needed some long-running flows to execute faster. This flow is an example where an S3 Query returned a lot of records and we needed to perform another action for every single record returned. To improve the processing time, we broke the work up so that the S3 Query was executed simultaneously in multiple workunits instead of a single workunit.

 

This sample collects a set of KEYS for the table we are querying inside LSF, then triggers a new workunit to handle each group of records.
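
A perl sketch of the grouping idea (the key list and trigger_workunit() are hypothetical stand-ins for the S3 Query results and the Trigger node):

use strict; use warnings;
my @keys  = (1 .. 10_500);                    # stand-in for keys returned by the S3 Query
my $GROUP = 1000;                             # records handled per triggered workunit
while (my @chunk = splice(@keys, 0, $GROUP)) {
    trigger_workunit(join ',', @chunk);       # one new workunit per group of keys
}
sub trigger_workunit {
    my ($payload) = @_;                       # stub for the Trigger node
    print 'trigger: ', substr($payload, 0, 40), "...\n";
}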

 

IPABP-BackDatedProxyUserApproval & IPABP-ResetWorkunitsForNewProxies

In Infor Process Automation 10.1.1.x we introduced some new features. One of them is the concept of a Proxy User: a Process Flow Approver can create a proxy that grants another user access to fill in for them while they are away. When this feature was developed, to avoid problems with pre-existing code, we could not safely grant the proxy user access to pre-existing work items.

 

Say a user is assigned 2 work items on Wednesday, then sets up a proxy for another user to cover for them. Even if you back-date the proxy access to Tuesday, the new proxy user will not see the items created Wednesday. So I came up with the following concept:

  1. If I design my approval flow with this limitation in mind, I can put an action on every User Action node that simply leaves the current UserAction node and then comes right back to the same UserAction.
  2. I can design another flow to find all workunits that are waiting on a UserAction and have that flow execute this special action, causing those flows to leave the current UA and return.

Why? Because this means that after I create a proxy, I can run my Proxy Reset flow to have each flow leave and return, creating new Work Items that include the new proxy user.

 

The method I am describing works, but it may not be for everyone. In this instance I am stealing the Timeout action so that regular users cannot see my “Reset Action”; if you already use a timeout, additional modification to this concept would likely be required.

 

IPABP-LargeHTMLContentInVariable

Several times I have encountered a client looking to build a simple HTML content display to show a header record and all of its detail lines, such as REQHEADER and REQDETAIL. However, the database limits a string variable to 2048 characters, so if the flow has multiple levels of approval, once the flow stops at the first approval the perfectly built HTML is truncated down to 2048 characters.

 

NOTE: If you trigger your flow from a File Channel, the Input Data variable is already populated and this approach won’t work for you. Essentially, this flow builds the HTML, makes a Landmark Transaction to create a PfiWorkunitInputData record to store the HTML, and then uses a short-term Wait node. The purpose of the Wait node is that when the workunit restarts, it loads PfiWorkunitInputData into the Input Data variable, where it can now be used anywhere else in the flow.

 

IPABP-LogLevelAdjuster (Windows Landmark)

This process flow is designed to update all Process Flow Definitions: it sets the workunit log level to “Workunit Only”, sets the CancelCheckFrequency to “Intermittent @ 30 seconds”, and disables CaptureActivityStartandEnd times. For efficiency’s sake, these are the recommended settings for running a flow inside a Production environment unless you are specifically troubleshooting an issue with the flow.

 

NOTE: This flow contains 2 variables in the start node that must be updated before implementing it. The “BaseLmrkDir” variable needs to point to the absolute path of the Landmark directory where the “enter.cmd” command resides. You must also use \\ instead of \ because the directory is stored in a javascript string. For example: D:\\lmrkprod. If the path contains multiple directories, be sure to use double backslashes for all of them: D:\\Landmark\\prod.

 

Also, you should modify the email address to your own email address.

 

UNIX – You should be able to modify this flow fairly easily to work for UNIX as well. You simply need to modify the System Command node so that it runs a Landmark command: cd /landmarkdirectory && enter.sh && listdataareas. Everything else is Landmark based.

5 needs your technical documentation should address

Here at Nogalis we provide managed services for several dozen enterprise customers. Most of our customers are either using Lawson V10 on premise or Infor CloudSuite products in the cloud. Our customers vary in their level of complexity, but almost all of them have several custom interfaces that support the operation of their businesses. Most of these interfaces are built with IPA (Infor Process Automation) or with ION, both of which are Infor-supported products. Here are some examples, just to name a few:

  • Positive pay interfaces to banks
  • Invoice import interface
  • Vendor creation interface
  • Automated user provisioning
  • Employee benefits exports and imports
  • COBRA interface
  • Batch job automation
  • Invoice or Purchase Order Approval interfaces
  • Journal entry imports

And many more

Many of these interfaces were designed and developed years ago and have been modified several times since. Unfortunately, the same cannot be said about the documentation that was once made for them, if any was made at all. In an upcoming webinar and subsequent article, I plan to discuss how to develop accurate, useful, and easy-to-maintain documentation. But in this article, I want to focus on the reasons why we need these documents, because without knowing why we do something, we’re not likely to do it right. The reasons below serve as guidelines for our documentation:

Reason 1 – Supporting the interfaces. This is the primary reason for creating good documentation. The goal of any documentation should be that anyone can read it from start to finish and be able to support the existing process when it breaks. Therefore, one of the first things we do for our new managed service customers is to create detailed documentation of all their interfaces on our DOCR documentation portal and give them access to it. You can see some examples here. What’s nice about storing these documents on a web portal is that there is a central place for keeping them updated that everyone can contribute to. For support reasons, we make sure that we have a troubleshooting section and a recovery section in each of our interface documents.

Reason 2 – Updates and enhancements. We’re routinely asked to update and enhance existing interfaces. While we can dig through the entire code of the interface to find out what every piece does, it is always helpful if some documentation exists that has an overview of the different components of the interface.

Reason 3 – Change Control. As we make changes to interfaces over the years, it is important to document these changes for change control purposes. Having a web-based documentation portal makes it easy to do this as it is the only version of the document that exists, and it is always updated with the latest changes.

Reason 4 – Application upgrades. As we upgrade our large enterprise applications, we must always study the impact of the upgrade on any custom interfaces that we have. The only way to do this quickly is to review the documents, focusing on any sections covering dependencies and general data flow. Having proper documentation during an upgrade process can make the difference between a 1-month upgrade and a 6-month upgrade.

Reason 5 – Training and new hires. We pride ourselves on our one-to-one client-to-resource ratio in our managed service group. That means that for every new managed service customer, we add at least one new member to our team. This new team member goes through rigorous training that includes reviewing all existing client documentation. This is an indispensable tool for our team as well as any client new hires.

If you need help creating your interface documentation, or to subscribe to our DOCR documentation portal, please contact us here.

Best Practice to Load File into IPA

The Data Iterator node is commonly used to loop through records but it can also read a file into IPA. (For more on the Data Iterator Node, visit: https://www.nogalis.com/2017/05/04/ip-designer-series-the-data-iterator-node/)

Based on the responses of seasoned IPA developers on the Infor/Lawson forums, the best way to load a file into an IPA flow is to use a FileAccess node followed by a DataIterator node. This speeds up the flow considerably: the FileAccess node reads the file into memory once, and the DataIterator node then works from the in-memory data instead of the file being opened and closed multiple times.

First, load (read) the file into IPA using the FileAccess node. Then set the DataIterator to process Data (not File) and set the source to FileAccess_outputData. This should noticeably improve the performance of the flow, as the data is loaded into memory only once by the FileAccess node.

 

IPA – LSF Server Configuration Recommendations from Infor

Infor Process Automation should be configured correctly to ensure proper functioning of other Lawson System Applications. Here are the official best practice IPA-LSF Server Configuration recommendations directly from Infor. (KB 1946828)

Recommended Configurations

  1. JT-973173

    This JT resolves a memory leak issue in the Event Manager Java Process. Without this JT, the Event Manager Java Process will slowly grow in size and, if left unchecked, can consume all RAM and even crash the LSF Server.

  2. Remove lpsMaxHeap=XXXXX and lpsMinHeap=XXXXX from LAWDIR/system/bpm.properties

    These settings are only required when using JNI.

  3. Set useLPSBridgeSocket=true in LAWDIR/system/bpm.properties

    NOTE: With the LPS Bridge Socket connection, LSF batch/online programs no longer initialize a JVM; they simply make a socket connection to the Event Manager process to make the request. (See the bpm.properties sketch after this list.)

  4. Set the Windows pagefile on the LSF server to 32 GB
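
Taken together, items 2 and 3 amount to the following state in LAWDIR/system/bpm.properties (a sketch; other existing lines in the file stay as they are):

# lpsMaxHeap=XXXXX   <-- removed; only needed with JNI
# lpsMinHeap=XXXXX   <-- removed; only needed with JNI
useLPSBridgeSocket=true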

 

Additional Recommendations for Infor Cloud Clients

  • Ensure that lpsHost=inforbclm01.inforbc.com
  • NOTE: If this setting is not pointed at the internal domain, a grid session memory leak can occur in Event Manager on the LSF server
  • NOTE: Changes to this setting should be made by executing pfserv config lps, and they require a restart of the LSF Process Flow and LSF Web Application servers.

Additional Recommendations for LSF on LINUX

  • Ensure LSF JT-875069 is applied to the LSF system; it allows you to add “useLPSLocalServices=true” to LAWDIR/system/bpm.properties
  • Add “useLPSLocalServices=true” to LAWDIR/system/bpm.properties so that LSF looks up IPA services in the LOGAN database instead of connecting to IPA
  • Follow KB 1936921, which provides two process definition files (a ServiceSyncFlow) used for synchronizing services from IPA/Landmark to LSF
  • In the GEN data area of the Landmark Rich Client, navigate to the ConfigurationParameter business class and add: Component=ipa, Name=useRMIWebServlet, Value=true

Setting Up LSF Java User Permissions for IPA

When working with Infor Process Automation (IPA), code or programs can be executed remotely on the Lawson System Foundation Server through these four nodes:

  1. System Command Node
  2. File Access Node
  3. Resource Query Node
  4. Resource Update Node

These nodes work by making a connection (via RMI call) to a java.exe process running on the Lawson System Foundation Server. Therefore, it is vital that the process owner has the proper access to run these commands.

Follow the instructions below to configure your LSF system so these processes will be owned by a user that has the necessary access:

  1. Create two files (pfrmi.cfg and pfem.cfg) in the %LAWDIR%/system directory. The next time Process Flow is restarted, the java.exe processes will refer to these files to determine which user starts the java.
  2. Both files should be identical and have just two lines each:
    line 1: LAWSONUID DOMAIN\accountname
    line 2: 

Replace DOMAIN with your own domain and accountname with your own account name; this is the user you are designating to run the java command, and it must have the proper access to run those commands. The domain/accountname combo must be a valid user defined in the LSF Environment Service Identity.

The second line needs to be a blank line (only if the LSF system is running on Windows; no blank second line is needed for UNIX).

The labels “line 1” and “line 2” are only there to show the line numbers; the actual words should not appear in the files.
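
For example, with a hypothetical domain ACME and service account ipasvc, pfrmi.cfg and pfem.cfg would each contain just this line (followed by one blank line on Windows):

LAWSONUID ACME\ipasvc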

Top Takeaways from “Process Automation – Performance Considerations and Best Practices” Infor Webinar

This one-hour interactive briefing from Infor provides an overview of best practices for performance-tuning your IPA process flows, grid tuning considerations, and tips to get the most from your system.

The full webinar can be accessed through a link on this page: (https://technology-blog.infor.com/2015/12/13/recording-now-available-for-ipa-performance-considerations-and-best-practices-webinar/)

Here are some notable takeaways from the video:

  1. IPA Settings Tuning
    Basic tips for improving performance:

    • Ensure you are on the latest release of Landmark Environment.
    • Upgrading your system to a newer release of Java will often result in performance improvements.
    • Core Pool Size: the pfi.dispatcher.CorePoolSize setting within Grid determines how many workunits can process simultaneously. The rule of thumb is to start by setting this value to the number of CPU cores in your Landmark server (8-core CPU = 8 max workunits). To optimize further, start a number of simultaneous flows equal to your server’s cores and check Task Manager’s Performance tab. For example, if only 50% of your CPU is being used with 8 simultaneous flows in progress, you can probably safely increase the maximum workunits to 12 or even 16 for improved performance.
    • Max Heap: the grid.jvm.maxHeapMB setting within Grid sets the maximum amount of server RAM a particular Grid node can use for processing. When setting this value, be aware of how much total RAM is available on the server: the Max Heap sizes of all Grid nodes, plus operating system memory and the memory footprint of other programs, must add up to less than the total RAM of the server. With those constraints in mind, the Max Heap should be set as high as possible.
    • LmrkDeferred Node: when installing Landmark, some grid deployments create an LmrkDeferred grid node, which combines the functionality of Async and IPA into one node. You should not run Process Automation with this grid node; Infor recommends that the Async node and IPA node be broken out into their own grid nodes.
    • Configure the system so that there is only one IPA Grid node per Data Area that processes workunits.
  2. Process Tuning
    Basic tips for improving processes/flows:

    • The most important factor is the number of nodes. Try to minimize the number of nodes in your flow, as flows take approximately 5-20 ms between nodes and 2-5 ms per variable assignment. An important tip when trying to reduce nodes: values returned from a query or processing node are automatically assigned an internal variable name that you can refer to, so there is no need for an additional assignment.
    • When using a query to cycle through records and write to a file, a MsgBuilder node is more efficient than a FileAccess or Assign node, as it keeps the records in memory to be written all at once later. A FileAccess node inside a query loop is the most inefficient approach, as it opens and appends to the file once for every record encountered.
    • Turning on logging decreases flow performance, so it should only be turned on when troubleshooting flow failures or performance issues. When a flow is failing, turn on Workunit and Activity logging, and turn it off when done troubleshooting. For performance issues, run the flow with Workunit-only logging turned on.
    • When creating a large csv file, consider using a SysCommand node instead of writing to a file by looping through records.

For more details and the most recently updated KB articles, refer/subscribe to:
KB 1671693 – IPA Support Best Practices

Infor Process Automation (IPA) Best Practices and Performance Improvement Tips

Since Infor Process Designer is an open-ended visual design tool, different users can achieve the same end goal in many different ways. While a flow might technically “work”, this level of design freedom usually leads to processes that are not as efficient as they could be.

Here are some tips to keep in mind as you design your next flow:

  1. Use a MsgBuilder node instead of writing to file for each record

    By using a MsgBuilder node, we can append all found records to a string in memory, then reference that string when we need to write the records. This is much faster than writing each line individually through the FileAccess node.

  2. Merge Assign Nodes

    This is a common mistake in many processes. There is no reason for two Assign nodes to line up one after another in a flow; you can simply use one Assign node for all your variables/javascript. More nodes in your flow result in slower speeds, so you should always try to use as few nodes as possible.
  3. Remove Unnecessary Assign Nodes

    When a value is returned from a query or processing node, it is automatically assigned an internal variable name.

    In the screenshot above, we see the values pulled from my SQL query have already been automatically assigned a variable. Therefore, there would be no point in having an Assign node to set SQL ADDRESS to my custom variable <!ADDRESS>. It would be better to simply call <!SQLQuery1080_ADDRESS> when needed as the variable has already been created for me.

  4. Remote File Access
    When the Infor Lawson business applications and ProcessFlow are on the same server, file access is blazing fast since all the files are local. However, when IPA is on a separate server, the process slows down since the flow must now access the file across the network and not locally.

    To mitigate this issue, make sure file access is done as efficiently as possible. Perhaps reach out to those in charge of network IT to see about reducing network lag.

  5. Upload the Process with Logging Off
    Process logging can negatively affect performance. Unless you are troubleshooting a problem, processes should be uploaded with Log Level: No debug

IP Designer Email Node – Using HTML and Inserting Images

Using HTML or inserting images via the IP Designer Email node seems to be a common problem based on the number of forum posts on this topic.

Here are some of the solutions that have been proposed.

USING HTML

Based on the responses of senior IP Designer users, the gold standard for using HTML in your email is to write the HTML directly into a MsgBuilder node (http://www.nogalis.com/2017/09/12/ip-designer-series-message-builder-node/) so that you can reference the MsgBuilder variable name in the email body of the Email node.
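
For instance, a MsgBuilder node might assemble a hypothetical body like the following, with the MsgBuilder variable then referenced in the Email node:

<html>
  <body>
    <h3>Requisition 1234 - Approval Needed</h3>
    <p>Requested by: Jane Doe</p>
    <img src="https://www.example.com/logo.png" alt="Company logo">
  </body>
</html>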

INSERTING IMAGES

  1. A simple way to insert images in emails sent by the IP Designer Email node is to compose HTML as shown above and bring in images from web servers.
  2. Another way is to store the image in the Lawson emailattachments directory and attach it in the Email node. In Windows, the directory to store the images is: lawsondirectory/bpm/emailattachments
    In Unix: lawsondirectory/LPS/emailattachments/