Depending on the process run, workunit logs can grow extremely large. If a workunit log grows too large to extract from the Landmark Rich Client, you can use the following command to extract it from a Landmark command prompt instead.

What Is the Landmark Command Prompt?
The Landmark administrator will perform many tasks from a command prompt. When you are instructed to use a Landmark command prompt, you should be sure you are in the Landmark Environment that you want to use, and that all environment variables are set correctly.

Setting Environment Variables
Before you start, be sure that the /etc/lawson/environment/environmentName/config.sh file contains the appropriate settings for environment variables.

 

Use this procedure to export the appropriate environment variables for your Landmark Environment before issuing commands from a Landmark command prompt.

To set the Landmark Environment variables

At a command prompt, type

. cv landmark-env-name

Where landmark-env-name is the name of the Landmark Environment.
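For example, if your Landmark Environment were named prod (a hypothetical name used here only for illustration), you would type:

. cv prod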

Resolution:

dbexport -C -f "PfiWorkunit=####" -o . -n <dataarea> PfiWorkunit

Example:

dbexport -C -f "PfiWorkunit=7000" -o . -n prod PfiWorkunit

Is documenting your interfaces challenging? Do you find yourself confused as to what you should focus on and how best to format things? Do you feel that the work you’re creating is not going to get used, and are you unsure where to store it all? Join our webinar to get answers to all those questions and learn a new way to create accurate, valuable, and well-written documentation for your team. We’re going to cover a host of topics, including:

  • How documentation of your interfaces will impact things
  • What you should include in your interface documentation
  • How to effectively create a document that has value to the organization
  • How to store your documents for best value, retention, correctness, and accountability
  • Q/A

 

Infor KB article 1667935 offers a list of sample flows and scripts used to accomplish common tasks in Lawson.

Download Sample Flows and Scripts

Preamble

During the course of providing support for Infor Process Automation, I periodically find the need to create a process flow or short script to satisfy some need that I have. This KB article is a place you can look for some example flows and scripts. The content here is built as a set of example techniques and is not intended to be executed in a Production environment without modification.

FP.pl – (perl script)

This perl script accepts a file as input, such as a workunit.log that is too large to open in a text editor, and breaks it into smaller pieces. Execute from a command window:

C:\>perl FP.pl -f C:\folder\filename.ext -n 100000

 

This would create new files C:\folder\filename1.ext, C:\folder\filename2.ext, and so on, each containing 100,000 lines.

 

LogReporter.pl – (perl script)

This Perl script is used for determining where a process flow is spending its processing time. It is packaged in a zip file; when unzipped, LogReporter.pl should be inside a folder that also contains a /lib folder holding a required Perl module. To run the script, save off a workunit log that was captured at the “Workunit Only” log level. Then, from a command window, cd to the directory where LogReporter.pl exists and execute:

perl LogReporter.pl -f C:\folder\workunit.log

 

This will create two CSV files, one of which is a report of where the flow spent its time.

 

IPABP-FileAccess-IntermittentWrite.lpd

This is a sample process flow which provides three examples that achieve the same result. The first technique stores all data in a Message Builder node, then writes to a file once. This technique is only possible if the process flow does not stop at a User Action node or Wait node, which would wipe current variable values. The second example appends to the file once per record returned. I recommend avoiding this strategy, as there is overhead in repeatedly opening the file, writing, and closing it. The third example shows how to store data for X records, then periodically write the data to a file. This is the recommended technique, as it minimizes the memory footprint and eliminates the need to write to the file for every record.

 

IPABP-CSVFile-SQLCMD.lpd

This sample flow makes use of SQL Server’s SQLCMD command-line utility to obtain a CSV file directly from the database. A technique like this can save significant resources and time compared to building a CSV file from within a flow. The command we run is:

SQLCMD -S . -d DATABASENAME -U username -P password -s, -W -Q "SELECT PFIWORKUNIT, FLOWDEFINITION, FLOWVERSION, WORKTITLE, PFIFILTERKEY, FILTERVALUE FROM [DATABASENAME].[SCHEMA].[PFIWORKUNIT]" | findstr /V /C:"-" /B >

Microsoft sqlcmd utility documentation: https://docs.microsoft.com/en-us/sql/tools/sqlcmd-utility?view=sql-server-2017

TIP: Adding -E instead of -U and -P will cause the command to use a trusted connection and pass Windows credentials to the database. Running this command from within a flow means the command actually executes as the user that Landmark runs as, and those are the credentials passed to the database. If you add a Windows Authentication login for that user, you won’t have to pass the user name and password in plain text from a process flow.
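As a rough illustration (keeping the placeholder server, database, schema, and query from the example above, and using a hypothetical output path), the same command with a trusted connection might look like:

SQLCMD -S . -d DATABASENAME -E -s, -W -Q "SELECT PFIWORKUNIT, FLOWDEFINITION, FLOWVERSION, WORKTITLE, PFIFILTERKEY, FILTERVALUE FROM [DATABASENAME].[SCHEMA].[PFIWORKUNIT]" | findstr /V /C:"-" /B > C:\folder\workunits.csv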

 

TIP: The -h flag controls the header row: -h -1 prints no headers at all, while -h 100 would print the headers every 100 rows of data.
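For instance, a trimmed-down version of the command above (shortened column list, trusted connection, and a hypothetical output path) that suppresses the header row entirely might look like:

SQLCMD -S . -d DATABASENAME -E -h -1 -s, -W -Q "SELECT PFIWORKUNIT, FLOWDEFINITION FROM [DATABASENAME].[SCHEMA].[PFIWORKUNIT]" > C:\folder\workunits.csv

With no headers there is also no dashed separator line, so the findstr filter is no longer needed.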

 

IPABP-WorkunitGovernor

Periodically we see a flow that is deemed critical to the business alongside a few less important process flows that can take a lot of time to run. In this scenario it is possible for several less important flows to be triggered and occupy all available workunit threads, which forces the important flows to wait for an available thread. This is a sample of a technique you could use to limit the number of less important flows that can run at the same time.

 

The flow uses a simple Landmark transaction to count how many workunits with the same name are currently processing. If that number is higher than the value we set, the flow waits a few minutes and then gets another count. We are leveraging the Wait node, which parks the workunit and makes its workunit thread available to other flows.

 

IPABP-JobSubmit and IPABP-JobWait

NOTE: Due to a bug in the Landmark Runtime Environment, you must be on a Landmark version above 10.1.1.10 for this flow to work.

 

Lawson System Foundation deprecated the old cgi-lawson/jobrun.exe command in LSF10 and replaced these cgi executables with lawson-ios/action servlets. I developed this example flow as a way to submit a job and have Process Automation wait for the job to finish before moving on. The benefit of this technique is that it does not consume a workunit thread while the job is executing in LSF.

 

It works like this: you have a main flow (IPABP-JobSubmit) which uses a Trigger node to launch IPABP-JobWait. When it triggers the JobWait flow, it passes all the information required to execute the job, and JobSubmit then immediately goes to a User Action node, which drops it from processing and returns its workunit thread. JobWait submits the job and then cycles through a Wait node, periodically checking the status of the job. When the job completes, JobWait takes an action on the JobSubmit workunit, which wakes that flow up to continue processing.

 

IPABP-Multithread-S3Query

Before we found the performance improvements of the Java interpreter available in Landmark 10.1.1.48, there were times we needed to make some long-running flows execute faster. This flow is an example where an S3 Query returned a lot of records and we needed to perform another action for every single record returned. To improve the processing time, we broke this up so that the S3 Query was executed simultaneously in multiple workunits instead of a single workunit.

 

This sample collects a set of KEYS for the table we are querying inside LSF; then triggers a new workunit to handle that group of records.

 

IPABP-BackDatedProxyUserApproval & IPABP-ResetWorkunitsForNewProxies

In Infor Process Automation 10.1.1.x we introduced some new features. One of the new features is the concept of creating a Proxy User; this allows a Process Flow Approver to create a proxy that grants another user access to fill in for them while they are away. When this feature was developed, to avoid problems with pre-existing code we could not safely grant the proxy user access to pre-existing work items.

 

Suppose a user is assigned two work items on Wednesday and then sets up a proxy for another user to cover for them. Even if you back-date the proxy access to Tuesday, the new proxy user will not see the items created on Wednesday. So I came up with the following concept:

  1. If I can design my approval flow with this limitation in mind, I can have an action on every User Action node that simply leaves the current User Action node and then comes right back to the same User Action node.
  2. I can design another flow to find all workunits that are waiting on a User Action and have that flow execute this special action, causing those flows to leave the current User Action and return.

Why? Because this means that after I create a proxy, I can run my Proxy Reset flow to have each waiting flow leave and return, creating new work items that include the new proxy user.

 

The method I am describing works, but it may not be for everyone. In this instance I am stealing the Timeout action so that regular users cannot see my “Reset Action”; if you already use a Timeout action, a modification to this concept would likely be required.

 

IPABP-LargeHTMLContentInVariable

There have been several times where I have encountered a client looking to build a simple HTML content display showing a header record and all of its detail lines, such as REQHEADER and REQDETAIL. However, the database limits a string variable to 2048 characters, so if the flow has multiple levels of approval, once the flow stops at the first approval the perfectly built HTML is truncated down to 2048 characters.

 

NOTE: If you trigger your flow from a File Channel, the Input Data variable is already populated and this approach won’t work for you. Essentially, this flow builds the HTML, then makes a Landmark transaction to create a PfiWorkunitInputData record to store the HTML, and then uses a short wait node. The purpose of the wait node is that when the workunit restarts, it loads PfiWorkunitInputData into the Input Data variable, where it can now be used anywhere else in the flow.

 

IPABP-LogLevelAdjuster (Windows Landmark)

This process flow is designed to update all Process Flow Definitions: it sets the workunit log level to “Workunit Only”, sets the CancelCheckFrequency to “Intermittent @ 30 seconds”, and disables CaptureActivityStartandEnd times. For efficiency’s sake, these are the recommended settings for running a flow in a Production environment unless you are specifically troubleshooting an issue with the flow.

 

NOTE: This flow contains 2 variables in the Start node that must be updated before implementing it. The “BaseLmrkDir” variable needs to point to the absolute path of the Landmark directory where the “enter.cmd” command resides. You must also use \\ instead of \ because we are storing the directory in a JavaScript string, for example: D:\\lmrkprod. If there are nested directories, be sure to use double backslashes for all of them: D:\\Landmark\\prod.

 

Also, you should modify the email address to your own email address.

 

UNIX – You should be able to modify this flow fairly easily to work for UNIX as well. You simply need to modify the System Command node so that it can run a Landmark command, for example: cd /landmarkdirectory && enter.sh && listdataareas. Everything else is Landmark based.

If you change the database server that hosts your LBI data, you will need to point your LBI instance to the new server.  This is done in WebSphere.  Log into your LBI WebSphere console, and navigate to Resources > JDBC > Data Sources.  Click on each data source that needs to be updated (LawsonFS, LawsonRS, LawsonSN).  Modify the server name, click OK and Save.

If the user credentials are different for this new data source, from the data source screen go to JAAS – J2C authentication data and update the credentials there.

Save the configuration changes and synchronize the nodes (if applicable).  Go back to the Data Sources screen and test each connection.

To update the database server that your Lawson instance is pointing to, you will need to modify the MICROSOFT (or ORACLE) files for each environment that you are updating.  These files can be found at %LAWDIR%/DATAAREA/MICROSOFT.  Simply change the server name for the DBSERVER value and bounce your Lawson services.
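As a rough illustration only (the exact layout of the MICROSOFT file can vary by version, and NEWDBSERVER is a placeholder), the entry you are changing looks something like this:

DBSERVER  NEWDBSERVER

Keep whatever formatting the file already uses and change only the server name portion of the DBSERVER entry.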

NOTE: This article assumes that your new database server utilizes the same credentials/authorization as the original database server.

 

 

 

To point your Landmark instance to a new database server, you need to update the db.cfg file for each environment.  These files can be found at %RUNDIR%/DATAAREA/db.cfg.  Make sure you update the data source for each data area, including GEN.  Bounce the Landmark services, or reboot your server, and you are done!

 

 

Here at Nogalis we provide managed services for several dozen enterprise customers. Most of our customers are either using Lawson V10 on premise or Infor CloudSuite products in the cloud. Our customers vary in their level of complexity, but almost all of them have several custom interfaces that support the operation of their businesses. Most of these interfaces are built with IPA (Infor Process Automation) or with ION, both of which are Infor-supported products. Here are some examples, just to name a few:

  • Positive pay interfaces to banks
  • Invoice import interface
  • Vendor creation interface
  • Automated user provisioning
  • Employee benefits exports and imports
  • COBRA interface
  • Batch job automation
  • Invoice or Purchase Order Approval interfaces
  • Journal entry imports

And many more

Many of these interfaces were designed and developed years ago and have been modified several times since. Unfortunately, the same cannot be said about the documentation that was once made for them, if any was made at all. In an upcoming webinar and subsequent article, I plan to discuss how to develop accurate, useful, and easy-to-maintain documentation. But in this article, I want to focus on the reasons why we need these documents, because without knowing why we do something, we’re not likely to do it right. The reasons below serve as guidelines for our documentation:

Reason 1 – Supporting the interfaces. This is the primary reason for creating good documentation. The goal of any documentation should be that anyone can read it from start to finish and be able to support the existing process when it breaks. Therefore, one of the first things we do for our new managed service customers is create detailed documentation of all their interfaces on our DOCR documentation portal and give them access to it. You can see some examples here. What’s nice about storing these documents on a web portal is that there is a central place for keeping them updated that everyone can contribute to. For support reasons, we make sure that we have a troubleshooting section and a recovery section in each of our interface documents.

Reason 2 – Updates and enhancements. We’re routinely asked to update and enhance existing interfaces. While we can dig through the entire code of the interface to find out what every piece does, it is always helpful if some documentation exists that has an overview of the different components of the interface.

Reason 3 – Change Control. As we make changes to interfaces over the years, it is important to document these changes for change control purposes. Having a web-based documentation portal makes it easy to do this as it is the only version of the document that exists, and it is always updated with the latest changes.

Reason 4 – Application upgrades. As we upgrade our large enterprise applications, we must always study the impact of the upgrade on any custom interfaces that we have. The only way to do this quickly is to review the documents, specifically the sections covering dependencies and general data flow. Having proper documentation during an upgrade process can make the difference between a 1-month upgrade and a 6-month upgrade.

Reason 5 – Training and new hires. We pride ourselves on our one-to-one client-to-resource ratio in our managed service group. That means that for every new managed service customer, we add at least one new member to our team. This new team member goes through rigorous training that includes reviewing all existing client documentation. This is an indispensable tool for our team as well as any client new hires.

If you need help creating your interface documentation, or to subscribe to our DOCR documentation portal, please contact us here.

Whether you are refreshing your test LBI environment or moving all your data to a new database server, you may eventually need to migrate your report data for LBI. This is a relatively simple process, provided the LBI instances using the data are the exact same version and service pack level.

First, back up your LBI databases on the source server and restore to the destination server (LawsonFS, LawsonRS, LawsonSN).

If you are migrating data for one LBI instance, you just need to point your WebSphere data sources to the destination server.

If you are migrating data for a new LBI instance, or for your test environment, you’ll need to update all the services and references to the old LBI instance.  In the LawsonFS database, ENPENTRYATTR table, you’ll need to search the ATTRSTRINGVALUE column for your old server name, and replace it with the new server name.  For example,

UPDATE ENPENTRYATTR
SET ATTRSTRINGVALUE = REPLACE(ATTRSTRINGVALUE, 'source-server', 'destination-server')
WHERE ATTRSTRINGVALUE LIKE '%source-server%'
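Before running the update, it can help to preview which rows will be affected (a simple check using the same placeholder server name):

SELECT ATTRSTRINGVALUE
FROM ENPENTRYATTR
WHERE ATTRSTRINGVALUE LIKE '%source-server%'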

 

After you update those strings, you will need to rerun the install validators (EFS, ERS, and LSN) to set the correct URLs:

  • http(s)://lbiserver.company.com:port/efs/installvalidator.jsp
  • http(s)://lbiserver.company.com:port/ers/installvalidator.jsp
  • http(s)://lbiserver.company.com:port/lsn/admin/installvalidator.jsp

Next, log into LBI and go to Tools > Services.  Click on every service definition to look for the source server name, and update with the destination server name.

Make sure your data sources are pointing to the proper ODBC DSNs, and/or add new ODBC connections.  Test and verify all your reports.

If you’re reading this, you must have already become curious about serverless computing or perhaps you just read our other article on the topic. There is no doubt that the future of computing will not have any traditional servers in it. In fact, I would venture to guess that by 2025, nearly every new development project will be a serverless project. This is not very hard to believe given that we at Nogalis have been developing all our enterprise applications for the past year using this paradigm and have yet to come up with a real reason to spin up a server.

There are a few things you need to understand about “Serverless” before you can start your project.

  1. Firstly, your current server-bound applications cannot run serverless without going back to the drawing board. In an ideal Serverless application, each request needs to stand on its own. Each request needs to process within a single function, verify itself and its caller’s authentication and permissions, run to completion, and set the states as needed. Your current application doesn’t do that. Your server is likely holding on to user sessions that have been authenticated and running so that it can handle the user’s next request. Without getting any more technical, the important thing to understand is that a “Serverless” application has to be designed and developed that way; it cannot be ported over magically. At least not today in 2019.
  2. Most, if not all enterprise applications deal with a database. AWS has made some great strides in the area of “Serverless” databases and at the current time (2019) offers two database options that are serverless:
    • Amazon DynamoDB – Amazon’s own NoSQL database service
    • Amazon Aurora Serverless – an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible edition), where the database will automatically start up, shut down, and scale capacity up or down based on your application’s needs.

It is obvious that if Microsoft, Oracle, and IBM want to remain competitive in the DB space, they will also have to offer serverless versions of their database products or go the way of the mainframe. In the meantime, you can build a serverless application that connects to a server-bound database to store its data. We won’t judge you.

  3. In a serverless environment, you don’t have any local storage. Because you don’t have a, ummm, server. So, you will need to figure out your storage as a part of your design. No more writing log files to d:\temp. This is one of the biggest reasons your current server-bound application can’t just go without a server. Luckily, AWS offers several API-enabled storage solutions to deal with this limitation. Our favorite is still S3 because of its ubiquitous edge availability, its incredible speed, its flexible use cases, and its many other advanced features.
  4. Last on my list is the method for accessing your compute logic. For this, AWS provides their API Gateway, which can trigger any of your functions with ease.

Of course, the items above are not the entire story. For instance, AWS SNS is a fully managed messaging service that you can use for messaging. CloudWatch will help you with event monitoring and debugging. A whole host of services exist that can help with nearly everything else. In the past two years of developing serverless applications, we have yet to hit a need that we couldn’t eventually resolve with available AWS services. I say eventually because our developer brains were accustomed to thinking of every problem in a very server-centric way. Once you stop relying on the server to solve all your development challenges, you’ll think of some innovative ways to develop your applications that will surprise you.

If you’re considering developing a serverless enterprise application, we’d love to help. Please use the contact us page to make an appointment with someone on our team to discuss your serverless project.

Until recent years, we treated our servers like pets. We gave them names and assigned a high value to their health and uptime. If a server went down, we did everything in our power to get it back up and running. With the advent of virtualization, the term server became more synonymous with VM (virtual machine), and the fact that one was running didn’t have as much significance, simply because we could spin up many more like it within minutes. But that was still a problem: we had to spin up many more just like it. We had gone from treating servers like pets to treating them more like cattle. It appeared the dream of virtualization was realized, as we didn’t need to worry about one specific server anymore. If web-007 failed, web-001 through web-006 were still around to handle the traffic, and no one would even notice the difference while a new instance was generated. But even in this new virtual reality, the virtual environment (our cattle) had to be up and running all the time, feeding on energy and needing attention.

This was a problem that didn’t seem to have a solution. It seemed that if you needed to compute a bit of logic, you would have to pass that request to a service that could process it for you and give you a result. So, the cattle were as efficient as we could get for a long while. But realistically, each bit of computing request is just a tiny little request. Surely, we don’t need an entire virtual farm always on standby to fulfill requests that are not even being called on all the time. What if we moved from the cattle model to the bacteria model? Imagine an infinite ocean of tiny little computing units (our bacteria) that would instantly rush to our call whenever we needed them. Now imagine if that ocean was shared by all our applications.

This is the serverless dream. In the cloud ecosystem, we call these container services, and major cloud providers like Microsoft Azure and Amazon Web Services offer them as an all-you-can-eat ocean of compute that you can tap into whenever and however much you desire. Imagine never having to scale your server infrastructure. Imagine only getting charged for the infinitesimally small amounts of time that the CPU is processing your request. That is what services like AWS Lambda provide. AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume; there is no charge when your code is not running. You simply write all the logic of your code within a Lambda function, link the function to an API gateway, and you’re ready to invoke your logic from anywhere, on any device, at any time, and as many times as you like. It is easy to see that with this new model of computing, your data center will soon become a thing of the past, and with it the skillset you have developed around the data center model.

So, what will it take to go fully serverless? Can you run your applications on this new computing platform? How do you develop applications in a way that can take advantage of these new technologies? Subscribe to our newsletter to be notified of our upcoming articles that will address all these questions. If you have a serverless development project that you would like to talk to us about, you can contact us directly here. Nogalis has been developing serverless applications using AWS Lambda since 2017 and we’d love to discuss your upcoming projects.