Running BS531 creates two separate CSV files, one for employee benefits (BNBATCH) and one for dependent benefits (BNDEPBATCH).

Copy these files over to the D:\lsfprod\law\prod10\work\BNBATCH directory on your LSF application server (your directory names may differ slightly).

After creating your BN531 job, go into LID and use the jobdef command to locate the job under the user it was created under.

Once located, press F6 and select “C. CSV File Attributes” to verify the file settings.

Verify the File Name is BNBATCH and the File Type is CSV.

Now run the BN531 job in report mode to verify there are no errors.

Run BN531 in update mode.

 

Once done, go back to the D:\lsfprod\law\prod10\work\BNBATCH directory and rename the BNBATCH file to bnbatch_emp, then rename the bndepbatch file to BNBATCH. Run BN531 again in report mode. Once you have verified there are no errors, run BN531 in update mode and confirm the data carried over.
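If you prefer to do the renames from a Windows command prompt, a minimal sketch might look like the following, assuming the work directory path shown above (adjust the path and file names to your environment):

cd /d D:\lsfprod\law\prod10\work\BNBATCH
rem set the employee file aside, then promote the dependent file
ren BNBATCH bnbatch_emp
ren bndepbatch BNBATCH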

Hope this was helpful.

 

Can’t Change Pricing on a Requisition or a PO?

There are settings in the Buyer (PO04) and Requestor (RQ04) setup that control whether pricing defaults in from any contracts or vendor agreements you might have in place. If you want a buyer or a requestor to be able to change the price on a PO or a requisition, consider the following changes:

 

PO04:

At the bottom of the Main tab in PO04, you can set up a buyer to allow unit cost overrides for costs that default in from different sources. In this case, the user can’t override a cost that defaults in from a vendor agreement or Strategic Sourcing (Contracts). However, they can override the cost on a PO line item if it defaults in from the last PO or Last Cost (invoice cost or adjusted cost).

 

RQ04:

At the bottom of the Main tab in RQ04, you have similar options. In this case, the user does not have the ability to override any cost that might default onto a requisition.

Creating a new interface? Doing an Upgrade? Don’t forget the Business Requirements Document (BRD)

Often we go through the process of meeting and discussing different aspects of a project. Those meetings are essential to understanding what needs to be done. A good BRD is basically a bible for everyone to refer back to during, and even after, the project to make sure that everything has been done at the level that was decided upon.

It is also a good way to make sure that everyone goes into the project with the same understanding. It is important to keep the BRD up to date with any changes that are decided upon. This limits overlooking small aspects of a project that were only mentioned in... when did that email get sent again?

Based on client needs, we put together a BRD that reiterates what we think the client said. When you review and approve it, it means we are both working under the same premise.

It helps to eliminate those famous cartoon moments.

Considering a Divestiture? Here are some things to think about:

History: How much data do you need to extract to send with the divested division/company?

Consider each module that currently has data and determine whether there is a similar module in the new system. If there isn’t, how will the data be accessed? One option is a separate database that stores the information for lookups as needed.

Fixed Assets: Do you have assets that need to be moved with the divested division/company?

What are the criteria for which assets need to be moved with the divested division/company?

What date will these assets be last depreciated in your current system and how will the depreciation continue in the new system?

Will you keep the original in-service date and depreciation history and continue depreciating the same in your new system?

AP Payments:  How will you handle bank reconciliations and payment voids that occur after the division/company is divested?

Will there be on-going communication for this information to be sent?

Will there be a complete pay-off in the current system so that AP starts clean in the new system with no open invoices?

Do you have a consolidated AP process now that needs to be split into multiple AP processes? If so, when should you start this process of separation? This is especially important if you have any EDI or other interfaces to AP that will need to be separated as well.

Vendors: How will you notify vendors that invoices after a certain date should be handled differently? Are your vendors even aware that there are two different divisions or companies that they are currently billing? It is very important to start working with your vendors and any 3rd-party AP interfaces sooner rather than later.

Purchases: How will any open POs and/or requisitions be handled?

Make sure that you have enough inventory on hand for the divested division/company prior to cutting off standard replenishment.

 

 

Problem: Workunits for Infor Process Automation are sitting in Ready Status and not processing.

Solution:
1. Via LmrkGrid (Grid Management), determine how many workunits may simultaneously process in your system (see the attached “DeterminingSimultaneous.docx”).

2. Determine how many workunits are currently in a processing state.

NOTE: Please refer to the attachment “DeterminingSimultaneous.docx” for instructions on determining simultaneous processes. If you need to engage support, screenshot this information so you can provide it when you open the support incident.

 

– Command to count records in Ready Status: dbcount <DATAAREA> -f Status="1" PfiWorkunit

– Command to count records in Processing Status: dbcount <DATAAREA> -f Status="2" PfiWorkunit

– Command to count records in Completed Status: dbcount <DATAAREA> -f Status="4" PfiWorkunit

 

It is a good idea to monitor and take counts of these records periodically. Is the number of workunits in Ready status growing? Is the number of workunits in Completed status growing? Is the number of workunits in Processing status equal to the maximum number of workunits that can simultaneously process?
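As a rough sketch, you could capture these counts on a schedule from a Landmark command prompt and append them to a log file for trending. This assumes a Windows server, a data area named prod, and a log path of C:\temp\workunit_counts.log (all placeholders), and that dbcount writes its count to standard output:

rem append a timestamp and the three counts to a running log
echo %date% %time% >> C:\temp\workunit_counts.log
dbcount prod -f Status="1" PfiWorkunit >> C:\temp\workunit_counts.log
dbcount prod -f Status="2" PfiWorkunit >> C:\temp\workunit_counts.log
dbcount prod -f Status="4" PfiWorkunit >> C:\temp\workunit_counts.log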

NOTE: If the number of workunits in Ready Status is growing and the number of workunits in Completed Status is not, then either:

  1. You have workunits that have been processing for a very long time and are holding up the system. Use the Grid Management UI to determine which workunits have been processing this long and whether they are stuck in a loop or are just processing normally large jobs. Consider cancelling the long-running workunits and scheduling them to run outside of business hours.
  2. If you are on Landmark 10.1.0.x, there was a bug in that version that periodically caused Async to stop picking up new workunits. This issue was resolved by a rewrite of the Async and LPA nodes in the 10.1.1.x Landmark versions. If you are on Landmark 10.1.0.x, restart the Async node and the IPA node.

NOTE: Workunits that were already queued to an LPA node will not automatically start back up; the workunit polling frequency (default 30 minutes) will need to trigger before they are requeued to a new LPA node.

If you are acquiring a new hospital in the near future, below are 5 things to consider:

  1. Will adding the new hospital cause a difference in your GL structure?
    • Are there different management hierarchies that need to be addressed in the new departments/cost centers?
    • If you are using automated flows for approvals, will the current structure work for the new location as well?
  2. Localized vs. Centralized purchasing and AP
    • It is often thought that centralizing purchasing and AP is the norm when acquiring a new hospital. This is often more difficult to achieve than it seems on paper.
    • There are item number differences – how will those be handled? Is there some normalized data in the item master that will allow for consolidating the item masters? Does the new hospital provide specialty services that require adding many more items to the item master? Who will maintain the new items?
    • There are long-term contracts in place at both locations, often with different prices and terms. Will these stay separate or be renegotiated?
      • Different locations often have different sales reps for the same suppliers. How will these relationships be affected? And what effect will it have on local service if the local rep is no longer involved?
  3. What kinds of reports are used at the new location that are not used currently in-house? Are the reports necessary? If they are not canned reports, how will they be created and maintained?
  4. How much history is necessary to bring over to the new system for each module that is being transferred to your ERP? How will older data be accessed when needed? Will the old data be available for research?

You can schedule your ProcessFlow to run on a weekly schedule by using the ProcessFlow scheduler. Recently, a client asked if there was a way to schedule it bi-weekly. Great news – there is a way, and it’s easy to set up.

In Rich Client, go to Start -> Applications -> My Actions -> My Scheduled Actions, then double-click the action you wish to modify.

On the next screen, scroll down to Scheduling Details and click on Week Number.

To schedule a process to run bi-weekly, select Even Weeks in the “Select Week Number” field and specify the day you want it to run (in this case, Monday).

Now you can have your ProcessFlow running every other week.

 

For a step-by-step guide to Scheduling a Process in IPA, check out this related article: Scheduling a Process in IPA

Depending on the process run, Rich Client workunit logs can grow extremely large; so large, in fact, that you may not be able to extract the full log from the Landmark Rich Client.

When that happens, you can use the following command to extract the log from a Landmark command prompt.

What Is the Landmark Command Prompt?
The Landmark administrator will perform many tasks from a command prompt. When you are instructed to use a Landmark command prompt, you should be sure you are in the Landmark Environment that you want to use, and that all environment variables are set correctly.

Setting Environment Variables
Before you perform this procedure, be sure that the /etc/lawson/environment/environmentName/config.sh file contains the appropriate settings for environment variables.

 

Use this procedure to export the appropriate environment variables for your Landmark Environment before issuing commands from a Landmark command prompt.

To set the Landmark Environment variables

At a command prompt, type

. cv landmark-env-name

Where landmark-env-name is the name of the Landmark Environment.
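For example, if your Landmark Environment were named prod (a placeholder name), you would type:

. cv prod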

Resolution:

dbexport -C -f "PfiWorkunit=####" -o . -n <dataarea> PfiWorkunit

Example:

dbexport -C -f "PfiWorkunit=7000" -o . -n prod PfiWorkunit

Is documenting your interfaces challenging? Do you find yourself confused as to what you should focus on and how to best format things? Do you feel that the work you’re creating is not going to get used, and you are not sure where to store it all? Join our webinar to get answers to all those questions and learn a new way to create accurate, valuable, and well-written documentation for your team. We’re going to cover a host of topics, including:

  • How documentation of your interfaces will impact things
  • What you should include in your interface documentation
  • How to effectively create a document that has value to the organization
  • How to store your documents for best value, retention, correctness, and accountability
  • Q/A

 

Infor KB article 1667935 offers a list of sample flows and scripts used to carry out certain tasks in Lawson.

Download Sample Flows and Scripts

Preamble

During the course of providing support for Infor Process Automation, I periodically find the need to create a process flow or short script to satisfy some need that I have. This KB article is a place you can look for example flows and scripts. The content here is built as examples of techniques and is not intended to be executed in a Production environment without modification.

FP.pl – (perl script)

This Perl script accepts a file as input, such as a workunit.log that is too large to open in a text editor, and breaks it into smaller pieces. Execute it from a command window:

C:\>perl FP.pl -f C:\folder\filename.ext -n 100000

 

This would create new files C:\folder\filename1.ext, C:\folder\filename2.ext, and so on, each with up to 100,000 lines.

 

LogReporter.pl – (perl script)

This Perl script is used to determine where a process flow is spending all of its processing time. It is packaged in a zip file; when unzipped, LogReporter.pl should be inside a folder that contains a /lib folder holding a required Perl module. To run the script, save off a workunit log that was captured at the “Workunit Only” log level. Then, from a command window, cd to the directory where LogReporter.pl exists and execute:

perl LogReporter.pl -f C:\folder\workunit.log

 

This will create two CSV files, one of which is a report of where the flow spent all of its time.

 

IPABP-FileAccess-IntermittentWrite.lpd

This is a sample process flow that provides three examples that achieve the same result. The first technique stores all data in a Message Builder node, then writes to a file once. This technique is only possible if the process flow does not stop at a user action node or wait node, which would wipe current variable values. The second example appends to the file once per record returned. I recommend avoiding this strategy, as there is overhead to opening a file, writing, and closing the file repeatedly. The third example shows how to store data for X records and then periodically write the data to a file. This is the recommended technique, as it minimizes the memory footprint and eliminates the need to write to the file for every record.

 

IPABP-CSVFile-SQLCMD.lpd

This sample flow makes use of SQL Server’s sqlcmd utility to obtain a CSV file directly from the database. A technique like this can save significant resources and time compared to building a CSV file from within a flow. The command we run is:

SQLCMD -S . -d DATABASENAME -U username -P password -s, -W -Q "SELECT PFIWORKUNIT, FLOWDEFINITION, FLOWVERSION, WORKTITLE, PFIFILTERKEY, FILTERVALUE FROM [DATABASENAME].[SCHEMA].[PFIWORKUNIT]" | findstr /V /C:"-" /B >

Microsoft sqlcmd utility documentation: https://docs.microsoft.com/en-us/sql/tools/sqlcmd-utility?view=sql-server-2017

TIP: Adding -E instead of -U and -P will cause the command to use a trusted connection and pass Windows credentials to the database. Running this command from within a flow means the command will actually execute as the user that Landmark is running as, and it will pass those credentials to the database; if you add a Windows Authentication login for that user, you won’t have to pass the user and password in plain text from a process flow.

 

TIP: The -h flag controls the header row: -h -1 prints no headers, and -h 100 prints the headers every 100 rows of data.
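For illustration, a variant of the sample command above that uses a trusted connection (-E) and suppresses the header row (-h -1) might look like this; the server, database, schema, and column names are the same placeholders used above:

SQLCMD -S . -d DATABASENAME -E -h -1 -s, -W -Q "SELECT PFIWORKUNIT, FLOWDEFINITION, FLOWVERSION, WORKTITLE FROM [DATABASENAME].[SCHEMA].[PFIWORKUNIT]"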

 

IPABP-WorkunitGovernor

Periodically we see a flow that is deemed critical to the business, alongside a few less important process flows that can take a lot of time to run. In this scenario it is possible to trigger several less important flows that occupy all available workunit threads, which causes important flows to wait for an available thread to process on. This sample shows a technique you could use to limit the number of less important flows that can run at the same time.

 

The flow is a simple Landmark transaction to count how many workunits with the same name are currently processing. If that number is higher than the value we set, the flow waits for a few minutes and then gets another count. We are leveraging the wait node, which parks the workunit and makes its workunit thread available to other flows.

 

IPABP-JobSubmit and IPABP-JobWait

NOTE: Due to a bug in the Landmark Runtime Environment, you must be above Landmark 10.1.1.10 for this flow to work.

 

Lawson System Foundation deprecated the old cgi-lawson/jobrun.exe command in LSF10 and replaced these cgi executables with lawson-ios/action servlets. I developed this example flow as a way to submit a job and have Process Automation wait for the job to finish before moving on. The benefit of this technique is that it does not consume a workunit thread while the job is executing in LSF.

 

It works like this: you have a main flow (IPABP-JobSubmit) which uses a trigger node to launch IPABP-JobWait. When it triggers the JobWait flow, it passes all the information required to execute the job; JobSubmit then immediately goes to a User Action node, which drops it from processing and returns the workunit thread. JobWait submits the job and then cycles through a Wait node, periodically checking on the status of the job. When the job completes, JobWait takes an action on the JobSubmit workunit, which wakes that flow up to continue processing.

 

IPABP-Multithread-S3Query

Before we found the performance improvements of the Java interpreter available in Landmark 10.1.1.48, there were times we needed some long-running flows to execute faster. This flow is an example where an S3 query returned a lot of records and we needed to perform another action for every single record that was returned. To improve the processing time, we broke this up so that the S3 query was executed simultaneously in multiple workunits instead of a single workunit.

 

This sample collects a set of keys for the table we are querying inside LSF, then triggers a new workunit to handle each group of records.

 

IPABP-BackDatedProxyUserApproval & IPABP-ResetWorkunitsForNewProxies

In Infor Process Automation 10.1.1.x we introduced some new features. One of them is the concept of a Proxy User; this allows a Process Flow approver to create a proxy that grants another user access to fill in for them while they are away. When this feature was developed, to avoid problems with pre-existing code, we could not safely grant the proxy user access to pre-existing work items.

 

So if a user is assigned two work items on Wednesday and then sets up a proxy for another user to cover for them, the new proxy user will not see the items created on Wednesday, even if you back-date the proxy access to Tuesday. So I came up with the following concept:

  1. If I design my approval flow with this limitation in mind, I can add an action on every User Action node that simply leaves the current UserAction node and then comes right back to the same UserAction.
  2. I can design another flow to find all workunits that are waiting on a UserAction and have that flow execute this special action, causing those flows to leave the current UA and return.

Why? Because this means that after I create a proxy, I can run my Proxy Reset flow to have each flow leave and return, thus creating new Work Items that include the new proxy user.

 

The method I am describing works, but it may not be for everyone. In this instance I am stealing the Timeout action so that regular users cannot see my “Reset Action”; if you already use a timeout action, a modification to this concept would likely be required.

 

IPABP-LargeHTMLContentInVariable

Several times I have encountered a client looking to build an HTML content display exec to show a header record and all of its detail lines, like REQHEADER and REQDETAIL. However, the database is limited to 2048 characters for a string variable, so if the flow has multiple levels of approval, after the flow stops at the first approval the perfectly built HTML is truncated down to 2048 characters.

 

NOTE: If you trigger your flow from a File Channel, the Input Data variable is already populated and this approach won’t work for you. Essentially, this flow builds the HTML, then makes a Landmark Transaction to create a PfiWorkunitInputData record to store the HTML, and then the flow uses a short-term wait node. The purpose of the wait node is that when the workunit restarts, it loads PfiWorkunitInputData into the Input Data variable, where it can now be used anywhere else in the flow.

 

IPABP-LogLevelAdjuster (Windows Landmark)

This process flow is designed to update all Process Flow Definitions: it sets the workunit log level to “Workunit Only”, sets the CancelCheckFrequency to “Intermittent @ 30 seconds”, and disables CaptureActivityStartAndEnd times. For efficiency’s sake, these are the recommended settings for running a flow inside a Production environment unless you are specifically troubleshooting an issue with the flow.

 

NOTE: This flow contains two variables in the start node that must be updated before implementing it. The “BaseLmrkDir” variable needs to point to the absolute path of the Landmark directory where the “enter.cmd” command resides. You must also use \\ instead of \ because the directory is stored in a JavaScript string, for example: D:\\lmrkprod. If the path contains two directories, be sure to use double backslashes for all of them, e.g. D:\\Landmark\\prod.

 

Also, you should modify the email address to your own email address.

 

UNIX – You should be able to modify this flow fairly easily to work for UNIX as well. You simply need to modify the System Command node so that it can run a Landmark command, for example: cd /landmarkdirectory && enter.sh && listdataareas. Everything else is Landmark based.