Procedure to Copy Lawson Portal Bookmarks from/to Different Servers or Environments
Bookmark data is stored in three places:
- Database tables in the LOGAN product line (listed below)
- LAWDIR/persistdata/lawson/portal/data/users/<user>.xml files
- LAWDIR/persistdata/lawson/portal/data/roles/<role>.xml files
Reminder: the <user>.xml files in LAWDIR/persistdata/lawson/portal/data/users contain references (bookmark subscriptions and/or locks) to the bookmark IDs in the original bookmark data. Before you begin, you must EITHER delete these files in the “to” environment OR copy them from the “from” environment to the “to” environment. Skipping this step will lead to orphaned references within the .xml files and errors on the Preferences > Content screen.
Similarly, the Portal role files shown on the “Manage Roles” screen under Portal Administration in Portal/Ming.le contain references to the bookmark IDs in the original bookmark data, so you must EITHER remove all bookmark locks in the “to” environment OR copy the files from the “from” environment to the “to” environment.
PROCEDURES
Update the <user>.xml files in the “to” environment:
Copy the <user>.xml files from the “from” environment’s LAWDIR/persistdata/lawson/portal/data/users directory to the “to” environment. Alternatively, you can remove all of the <user>.xml files in the “to” environment; they will be recreated when each user logs in and receives or assigns content. DO NOT DELETE the default.xml file in LAWDIR/persistdata/lawson/portal/data/users.
Update the <portalrole>.xml files in the “to” environment:
Copy the <portalrole>.xml files from the “from” environment’s LAWDIR/persistdata/lawson/portal/data/roles directory to the “to” environment. Alternatively, remove all of the bookmark locks in the “to” environment’s <portalrole>.xml files and reapply the locks later.
Backup and delete existing data in the “to” environment:
Perform the following tasks in the “to” environment, where the bookmark data will be copied to.
Back up and delete the LOBKMARK records (in the LOGAN product line/data area).
Back up and delete the LOGRPBKMRK records (in the LOGAN product line/data area).
Back up and delete the LOUSRBKOPT records (in the LOGAN product line/data area).
Back up and delete the LOUSRBKMRK records (in the LOGAN product line/data area).
Back up and delete the LOBKCONFIG records (in the LOGAN product line/data area).
Back up and delete the SISETUP records (in the LOGAN product line/data area).
Create dump files of the existing data in the “from” environment:
Perform the following tasks in the “from” environment, where the bookmark data will be copied from.
dbdump -d logan lobkmark > lobkmark.dmp
dbdump -d logan lobkconfig > lobkconfig.dmp
dbdump -d logan sisetup > sisetup.dmp
dbdump -d logan logrpbkmrk > logrpbkmrk.dmp
dbdump -d logan lousrbkmrk > lousrbkmrk.dmp
dbdump -d logan lousrbkopt > lousrbkopt.dmp
If the “from” and “to” environments are on separate servers, copy the .dmp files to the “to” server.
Load the data from the dump files in the “to” environment:
Perform the following tasks in the “to” environment, where the bookmark data will be copied to.
dbload logan lobkmark lobkmark.dmp
dbload logan lobkconfig lobkconfig.dmp
dbload logan sisetup sisetup.dmp
dbload logan logrpbkmrk logrpbkmrk.dmp
dbload logan lousrbkmrk lousrbkmrk.dmp
dbload logan lousrbkopt lousrbkopt.dmp
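The dump and load command sequences above can be generated from a single table list so that no table is missed. This is a sketch: the helper names are made up, but the table and product line names come straight from this article.

```python
# The six LOGAN tables that hold Portal bookmark data, per the steps above.
BOOKMARK_TABLES = ["lobkmark", "lobkconfig", "sisetup",
                   "logrpbkmrk", "lousrbkmrk", "lousrbkopt"]

def dump_commands(product_line="logan"):
    """dbdump command lines to run in the "from" environment."""
    return ["dbdump -d {0} {1} > {1}.dmp".format(product_line, t)
            for t in BOOKMARK_TABLES]

def load_commands(product_line="logan"):
    """dbload command lines to run in the "to" environment."""
    return ["dbload {0} {1} {1}.dmp".format(product_line, t)
            for t in BOOKMARK_TABLES]
```

Printing the two lists and pasting them into each environment's shell keeps the dump and load steps symmetric.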
Update the <portalrole>.xml files in the “to” environment:
Reapply bookmark locks if you removed them previously. If you copied the files over, you can skip this step.
Refresh the IOS cache:
Run IOSCacheRefresh (“Refresh IOS Cache” admin task).
Verify bookmark data is not corrupt:
Log into Portal and go to Bookmark Manager (“Manage Bookmarks” admin task). Add a new Top-Level bookmark. Then verify that you can see it at the top of the list of bookmarks in the Bookmark Manager. This is confirmation that the bookmarks are loaded properly and the data is not corrupted. If you don’t see it at all or it was added under another bookmark, then your bookmark data is corrupt and Support should be engaged.
Test the data from a user perspective:
Log into Portal as a user (or have a user test) and verify that the bookmarks in the “to” environment look the same as the “from” environment. If you copied the <user>.xml files over, the user shouldn’t notice any differences.
If you are like me, you find it frustrating that you can only see the scheduled IPA processes that you created. As an administrator, this can make tracking down process triggers quite difficult. It is also difficult to determine which process is triggered by which schedule in the front-end Rich Client. I have created a query that can show all schedules, and which process is triggered by the schedule. Feel free to take this and make it useful to you!
ORACLE
SELECT NAME,
       SUBSTR(AAR.ACTIONPARAMETERS, INSTR(AAR.ACTIONPARAMETERS, '<field name="FlowName" id="FlowName"><![CDATA[') + 46,
              INSTR(AAR.ACTIONPARAMETERS, ']]>', INSTR(AAR.ACTIONPARAMETERS, '<field name="FlowName" id="FlowName"><![CDATA[')) -
              (INSTR(AAR.ACTIONPARAMETERS, '<field name="FlowName" id="FlowName"><![CDATA[') + 46)) FLOW,
       SCHEDULEWEEKDAY, SCHEDULEHOUR, SCHEDULEMINUTE, TIMETOEXEC
FROM LMK_LAWSON.ASYNCACTIONREQUEST ASYNCACTIONREQUEST
INNER JOIN LMK_LAWSON.S$AAR AAR ON AAR.ASYNCACTIONREQUEST = ASYNCACTIONREQUEST.ASYNCACTIONREQUEST
WHERE "GROUP" = 'pfi'
ORDER BY NAME
SQL SERVER
SELECT NAME,
       RIGHT(LEFT(AAR.ACTIONPARAMETERS, CHARINDEX(']]>', AAR.ACTIONPARAMETERS, CHARINDEX('<field name="FlowName" id="FlowName"><![CDATA[', AAR.ACTIONPARAMETERS) + 46) - 1),
             CHARINDEX(']]>', AAR.ACTIONPARAMETERS, CHARINDEX('<field name="FlowName" id="FlowName"><![CDATA[', AAR.ACTIONPARAMETERS) + 46) -
             (CHARINDEX('<field name="FlowName" id="FlowName"><![CDATA[', AAR.ACTIONPARAMETERS) + 46)) FLOW,
       SCHEDULEWEEKDAY, SCHEDULEHOUR, SCHEDULEMINUTE, TIMETOEXEC
FROM LMKPRODGEN.ASYNCACTIONREQUEST ASYNCACTIONREQUEST
INNER JOIN LMKPRODGEN.S$AAR AAR ON AAR.ASYNCACTIONREQUEST = ASYNCACTIONREQUEST.ASYNCACTIONREQUEST
WHERE [GROUP] = 'pfi'
ORDER BY NAME
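To make the string arithmetic in both queries easier to follow, here is the same extraction restated in Python. The 46 in the SQL is simply the length of the CDATA marker; the sample payload below is made up for illustration.

```python
# The marker the SQL searches for; its length is exactly 46 characters,
# which is where the hard-coded +46 offsets come from.
MARKER = '<field name="FlowName" id="FlowName"><![CDATA['

def extract_flow_name(action_parameters):
    """Return the flow name embedded in an ACTIONPARAMETERS XML payload."""
    start = action_parameters.index(MARKER) + len(MARKER)
    end = action_parameters.index("]]>", start)
    return action_parameters[start:end]
```

If a future version ever changes the field markup, recomputing `len(MARKER)` gives the new offset to plug into the SQL.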
Nogalis would love to assist with all your IPA needs! We have some great resources on hand who can provide managed services of your system, training, and project work. Check out our managed services program today!
With Infor Process Automation, there are several ways to trigger a Process. This article will discuss how to trigger a custom process using 4GL.
First, create your Process. Test it and upload it to the Process Server.
Next, in IPA Rich Client (or the LPA Admin tool), you must create a Service Definition (Process Server Administrator > Administration > Scheduling > By Service Definition) and attach a Process to it. There, you will configure any variables that should be passed to the process.
Now, let’s create the trigger in the 4GL program. This will be either a custom program your organization has created, or an existing Lawson form.
The first step is to initialize the WF SERVICE.
INITIALIZE WFAPI-INPUT
INITIALIZE WFAPI-OUTPUT
MOVE <serviceNameString> TO WFAPI-I-SERVICE
PERFORM 1000-WF-SERVICE
***Verify that the return code != 0 (anything other than 0 indicates error)
IF (WFAPI-O-RETURN-CODE NOT = ZEROS)
GO TO 600-END
Next, create the Work Unit
MOVE WFAPI-O-SERVICE TO WFAPI-I-SERVICE
MOVE <workTitleString> TO WFAPI-I-WORK-TITLE
INITIALIZE WFAPI-OUTPUT
PERFORM 1000-WF-CREATE-SETUP
Now, populate your variables. You can have an unlimited number of variables per Service Definition, but you must populate them in groups of 10 (i.e., perform 1000-WF-ADD-VAR-SETUP once for each group of 10).
INITIALIZE WFAPI-INPUT
MOVE WFAPI-O-WORKUNIT TO WFAPI-I-WORKUNIT
MOVE "company" TO WFAPI-I-VARIABLE-NAME (1)
MOVE HR11F1-EMP-COMPANY TO WFAPI-I-VARIABLE-VAL (1)
MOVE "I" TO WFAPI-I-VARIABLE-TYPE (1)
MOVE "employee" TO WFAPI-I-VARIABLE-NAME (2)
MOVE HR11F1-EMP-EMPLOYEE TO WFAPI-I-VARIABLE-VAL (2)
MOVE "I" TO WFAPI-I-VARIABLE-TYPE (2)
INITIALIZE WFAPI-OUTPUT
PERFORM 1000-WF-ADD-VAR-SETUP
Finally, release the Work Unit
INITIALIZE WFAPI-INPUT
MOVE WFAPI-O-WORKUNIT TO WFAPI-I-WORKUNIT
INITIALIZE WFAPI-OUTPUT
PERFORM 1000-WF-RELEASE-SETUP
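The groups-of-ten rule above is worth spelling out: the WFAPI variable arrays hold ten slots, so a longer variable list must be sent in batches, with one 1000-WF-ADD-VAR-SETUP call per batch. A sketch of that batching logic (the tuple layout is illustrative, not a real API):

```python
def variable_batches(variables, batch_size=10):
    """Split a list of (name, value, type) tuples into groups of at most
    batch_size -- one group per WFAPI-ADD-VAR call, filling slots (1)..(10)."""
    return [variables[i:i + batch_size]
            for i in range(0, len(variables), batch_size)]
```

For example, 23 variables become three batches of 10, 10, and 3, meaning three PERFORM 1000-WF-ADD-VAR-SETUP calls.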
Common error with conversion upload file: “CSV Read Error: Bad Field Data Format for Fld: 5 on Record Nbr 1”.
This can happen, for example, when running PR530 in Lawson.
There are a couple ways to resolve this issue:
Resolution #1:
Fix the headers of the upload file by removing spaces or bad characters that the program doesn’t accept.
Also keep in mind that if there are problems with the header records, you can see the exact errors in the examine log after you run the job (Job Scheduler > Completed Queue > click on the completed job, then select ACTIONS from the menu and highlight Examine Log). This gives more detail as to the exact CSV header read errors.
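In the spirit of Resolution #1, a quick pre-upload check can flag header fields likely to trip the CSV reader. The allowed character set below (letters, digits, dash, underscore) is an assumption; adjust it to whatever your upload program actually accepts.

```python
import csv
import io
import re

def suspect_header_fields(csv_text):
    """Return header fields containing spaces or characters outside
    letters, digits, dash and underscore (an assumed rule set)."""
    header = next(csv.reader(io.StringIO(csv_text)))
    return [field for field in header
            if not re.fullmatch(r"[A-Za-z0-9_-]+", field)]
```

Running this over the file before submitting the job points you at the exact fields to clean up.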
Resolution #2:
For the job that’s throwing this error, go to jobdef in LID and enter the username and job name.
Move down to the form field >> F6 >> C. CSV File Attributes (PR530 is used in the example below):
Turn off File Header and Xlt Header Names and save.
When you run the job again, it will ignore the headers and process the data, so make sure the columns are correct. This has to be done for each user that runs the job.
Good luck!
Here are a few mitigation procedures to use when Landmark applications are crashing or performing very slowly and throwing core dumps on the server.
First, check the environment.properties file. Validate that it is pointing to the correct version/location of Java. Sometimes Java is updated on the server, but environment.properties is not changed accordingly. It’s not a guaranteed issue, but it should be validated and corrected if found.
Second, make sure there is adequate heap space provided to the JVM. Here are some example values for heavy processing:
com.lawson.was.nodeagent.jvm-init-mem=1024
com.lawson.was.nodeagent.jvm-max-mem=3072
com.lawson.was.dmgr.jvm-max-mem=4096
com.lawson.was.dmgr.jvm-init-mem=2048
com.lawson.was.<app server>.jvm-init-mem=3072
com.lawson.was.<app server>.jvm-max-mem=12288
Note: these values will be deployed every time a CU is applied, so you don’t have to keep updating them.
Be sure to recycle the Landmark application server once these changes are made.
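A small sanity check over properties like the ones above can catch a server whose initial heap exceeds its maximum before you recycle. Property key names follow the examples above; values are in MB; the helper names are made up.

```python
def heap_pairs(properties_text):
    """Collect {server: {"jvm-init-mem": n, "jvm-max-mem": n}} from a
    properties file's text."""
    pairs = {}
    for line in properties_text.splitlines():
        line = line.strip()
        if "=" not in line or line.startswith("#"):
            continue
        key, value = line.split("=", 1)
        if key.endswith(".jvm-init-mem") or key.endswith(".jvm-max-mem"):
            server, kind = key.rsplit(".", 1)
            pairs.setdefault(server, {})[kind] = int(value)
    return pairs

def misconfigured(pairs):
    """Servers whose initial heap is larger than their maximum heap."""
    return sorted(s for s, p in pairs.items()
                  if p.get("jvm-init-mem", 0) > p.get("jvm-max-mem", float("inf")))
```

An empty list from `misconfigured` means every init/max pair is at least self-consistent, though it says nothing about whether the sizes fit your physical memory.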
Note that clustering Landmark WebSphere would not do any good. Landmark applications behave very differently from Lawson applications, and vertical scaling would only waste resources.
When a user accesses their Inbasket, they can see the workunit and the associated data; however, when they approve the requisition, it leaves their Inbasket view in Portal with a success message. The workunit does not move on, nor does it show that action has been taken within the workunit.
Also, no further actions can be taken from that workunit.
Error from the lps.log:
Thu Jun 16 14:19:48 PDT 2016 - Message "com.lawson.bpm.errors.ErrorMessages.com.lawson.rdtech.framework.ProcessFlowException: Unknown security exception has occurred.
at com.lawson.bpm.eprocessserver.grid.ProcessFlowExecutor.getActorContext(ProcessFlowExecutor.java:651)
at com.lawson.bpm.eprocessserver.grid.ProcessFlowExecutor.getActorContext(ProcessFlowExecutor.java:617)
at com.lawson.bpm.eprocessserver.grid.ProcessFlowExecutor.processProcessFlowQueue(ProcessFlowExecutor.java:409)
at com.lawson.bpm.eprocessserver.grid.ProcessFlowExecutor._executeFlowPrivate(ProcessFlowExecutor.java:332)
at com.lawson.bpm.eprocessserver.grid.ProcessFlowExecutor._executeFlow(ProcessFlowExecutor.java:212)
at com.lawson.bpm.eprocessserver.grid.ProcessFlowExecutorProcessorImpl.sweepPrivate(ProcessFlowExecutorProcessorImpl.java:284)
at com.lawson.bpm.eprocessserver.grid.ProcessFlowExecutorProcessorImpl.processMessage(ProcessFlowExecutorProcessorImpl.java:81)
at com.lawson.bpm.eprocessserver.grid.ProcessFlowExecutorProcessorImpl.runPrivate(ProcessFlowExecutorProcessorImpl.java:186)
at com.lawson.bpm.eprocessserver.grid.ProcessFlowExecutorProcessorImpl.run(ProcessFlowExecutorProcessorImpl.java:150)
at java.lang.Thread.run(Thread.java:744)
Caused by: com.lawson.security.interfaces.GeneralLawsonSecurityException: Unknown security exception has occurred.
at com.lawson.interfaces.security.LawsonSecurityFactory.setRunAsUserOnDuplicateContext(LawsonSecurityFactory.java:1549)
at com.lawson.bpm.eprocessserver.grid.ProcessFlowExecutor.getActorContext(ProcessFlowExecutor.java:634)
… 9 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.lawson.interfaces.security.LawsonSecurityFactory.setRunAsUserOnDuplicateContext(LawsonSecurityFactory.java:1545)
… 10 more
Caused by: com.lawson.security.authen.SecurityAuthenException: com.lawson.security.authen.LawsonUserContextImpl.security.authen.actor_does_not_support_run_as
at com.lawson.security.authen.LawsonUserContextImpl.setRunAsUserOnContext(LawsonUserContextImpl.java:1497)
at com.lawson.security.authen.LawsonUserContextImpl.setRunAsUserOnContext(LawsonUserContextImpl.java:1460)
at com.lawson.security.authen.LawsonUserContextImpl.setRunAsUserOnContext(LawsonUserContextImpl.java:1451)
at com.lawson.security.authen.LawsonUserContextImpl.setRunAsUserOnDuplicateContext(LawsonUserContextImpl.java:1592)
at com.lawson.security.authen.DataCtxUserCtxWrapper.setRunAsUserOnDuplicateContext(DataCtxUserCtxWrapper.java:432)
… 15 more
" not found.
Resolution:
The lawson user does not have Run As enabled. By default, Infor Security Administrator has Enable Run As set to “NO” for the lawson user, and whenever a change to the lawson user is saved in Infor Security Administrator, the Run As setting gets flipped back to “NO” in Landmark. Go to Infor Security Administrator, change Enable Run As to “YES”, and save it. Then restart the LPA and Async grid nodes.