How to install the Oracle ODBC client for Windows

Getting the Oracle ODBC driver installed on a Windows machine is ridiculously difficult, and the official instructions are ambiguous. It’s certainly not a simple task. Well, since I figured it out today I thought I would document it.

First you need to download all three of these files from Oracle:
1) instantclient-basic-win-x86-64-
2) instantclient-odbc-win-x86-64-
3) instantclient-sdk-win-x86-64-
Extract the contents of all three archives into a single directory called C:\Oracle\Ora11\
That directory doesn’t exist by default, so you’ll have to create it first.
Now with all the files copied in there, create the following windows environment variables:
1) OCI_LIB64 = C:\Oracle\Ora11
2) ORACLE_HOME = C:\Oracle\Ora11
3) TNS_ADMIN = C:\Oracle\Ora11
Also, add “C:\Oracle\Ora11” to your PATH variable.
At this point you MUST reboot. No other way around it. Simply restarting the explorer task won’t do.
After the reboot, create the following directory:  C:\Oracle\Ora11\Network\Admin
Place your tnsnames.ora file inside this new directory.
Open a cmd prompt as administrator.
cd to C:\Oracle\Ora11
At the command prompt, run the install utility included in the ODBC package (odbc_install.exe).
You should receive a message that says: “Oracle ODBC Driver is installed successfully.”
The new driver will appear only in the 64-bit ODBC Data Sources administrator.
You can now add a new data source there.
Just remember that your tnsnames.ora file must contain the service you’re trying to connect to.
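For reference, a minimal tnsnames.ora entry looks something like the following; the alias, host, and service name below are placeholders that you’d replace with your own values:

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )
```

The alias on the first line (ORCL here) is the name you select when creating the ODBC data source.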

What is ADFS?

There has been a lot of confusion in the Infor client community lately over what ADFS is and what the impact of implementing it will be on the organization as a whole.
Active Directory Federation Services (ADFS) is a Microsoft solution created to facilitate Single Sign-On (SSO). It provides users with authenticated access to applications like Lawson without the need to supply their password to the application itself.
ADFS manages user authentication through a service hosted between the active directory and the target application. It grants access to application users by using Federated trust. The users can then authenticate their identity through Single Sign-On without having to do so on the application itself. The authentication process is usually as follows:
1) The user navigates to the Lawson URL
2) The unauthenticated user is re-directed to the ADFS service
3) The user signs into ADFS
4) ADFS service authenticates the user via the Active Directory
5) The user is then given an authentication claim (in the form of a cookie) by the ADFS
6) The user is forwarded to the Lawson application with the claim which either grants or denies access based on the federated trust service
Note: The Lawson server never sees the password information, which in the case of external applications (like a cloud implementation) is considerably more secure.
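The flow above can be sketched in a few lines of Python. This is only an illustration of the trust relationship, not how ADFS is actually implemented; the user store, issuer name, and return strings are all made up:

```python
# Toy model of the redirect/claim flow described above (all names hypothetical).

USERS = {"jdoe": "secret"}            # stand-in for Active Directory
TRUSTED_ISSUER = "adfs.example.com"   # the federated trust anchor

def adfs_sign_in(username, password):
    """Steps 3-5: ADFS checks the directory and issues a claim (the 'cookie')."""
    if USERS.get(username) == password:
        return {"issuer": TRUSTED_ISSUER, "user": username}
    return None

def lawson_request(claim):
    """Steps 1-2 and 6: no claim means a redirect; a trusted claim grants access."""
    if claim is None:
        return "302 redirect to ADFS"
    if claim.get("issuer") == TRUSTED_ISSUER:
        return "200 OK for " + claim["user"]
    return "403 access denied"
```

The key point the sketch captures is that lawson_request never sees a password; it only inspects the claim’s issuer.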
What are some drawbacks of implementing ADFS?
Although ADFS may be a new requirement, it comes with a few drawbacks that you should consider:
– The additional server license and maintenance – You will need an additional server (likely one per environment) to host ADFS
– ADFS is actually somewhat complex and this new skill set can create a new challenge for smaller clients who aren’t already using ADFS for other applications
– A standard ADFS installation is not all that secure out of the box, and several steps should be taken to harden it. Microsoft provides these best-practice recommendations:
There is also a great free e-book published by Microsoft about claims-based identity and access control:
To find out more about ADFS and how it can impact your organization, join our webinars or contact us.

Using sqlplus with Lawson (Tips and Tricks)

Most of our customers have by now switched over to a Windows / SQL Server environment, but we still have several customers who have stayed on Oracle for their database needs. This mostly stems from having the Oracle skill set in-house, as there is really no other advantage to staying on Oracle once you have moved over to Windows.

Often when there is trouble connecting to the application, it is useful to test the connection to the database from the server itself. Of course there are several ways to do this, but the fastest is to do so directly from the command prompt in LID, as it doesn’t require any additional setup or software. It is easy to forget how this is done, so we decided to write this quick article to document the very simple syntax.
The utility we’re going to use is called sqlplus, and it should already be installed on your LSF application server. Simply log in to the server using LID and type the following command at the prompt:
sqlplus <username>/<password>@dbserverName
If you have the correct username and password, and the server is responding, you will get a SQL> prompt on which you can run any query you want. Here’s an example:
However, if you type in an incorrect username or password, you will get an error such as ORA-01017: invalid username/password; logon denied.
And finally, if you have the incorrect server name or the server is not responding, the prompt will hang for several seconds and you will then see a connection error such as ORA-12154 (could not resolve the connect identifier) or ORA-12170 (connect timeout).
A few small notes about using sqlplus:
  • Be sure to use a semicolon to end your statements. Otherwise the application doesn’t know when to run your query.
  • Make sure the environment variable %ORACLE_HOME% is set correctly ($ORACLE_HOME on Unix).
  • To exit sqlplus, use the “quit” command.
  • The SQL buffer contains the last statement you ran; you can run the previous query again by simply typing “RUN” and hitting enter.
  • Use the LIST command to see your most recently executed SQL commands.
  • “HELP INDEX” shows a list of available commands.
  • To run a SQL script, simply put the “@” symbol in front of the file name and execute it, like @script.sql or even @/path/to/script.sql.
  • You can write multi-line SQL statements.
  • The “SHOW USER” command prints the name of the Oracle user you’re logged in as.
  • The “SHOW ALL” command prints all the current settings to the screen.
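For reference, a typical session might look like the following; the credentials, connect identifier, table name, and row count here are all made up for illustration:

```
C:\> sqlplus lawson/mypassword@LSFPROD

SQL> select count(*) from EMPLOYEE;

  COUNT(*)
----------
      1234

SQL> quit
```

Note the trailing semicolon on the query, and that quit returns you to the operating system prompt.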

How to test your SOAP Web Service calls using Postman

If you have worked with Web Services you can appreciate the ability to test your web service calls quickly and efficiently without a lot of programming. This is exactly what Postman was meant for. When you’re building SOAP service calls with IPA it can really make your life a lot easier if you have this particular skill and tool.
The secret sauce here is how you form the actual request. Just follow these steps:
  1. Set the method to POST
  2. Paste your URL in the “Enter request URL” field.
  3. Click the “Params” button and enter any parameters you may have. If you’re using IPA you probably don’t have any parameters to enter here and they’re all included in the body of your request.
  4. Click the Authorization tab and enter your authorization information. If you have a username and password, this is likely “Basic Auth”.
  5. Click the “Body” tab.
    • Select “raw”
    • From the drop-down on the right select “XML (text/xml)”
    • Paste your entire soap envelope into the body text area
  6. Click Send
That should do it. You’ll be able to see the status code (200 OK in our example) and the time it takes to make the call (570 ms in our case).
Then in the body of your response you can see what the request returns which is pretty great to see if you’re trying to get a feel for what you’re dealing with.
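If you’d rather script the same call outside Postman, the request Postman builds can be reproduced in a few lines of Python. Everything here — the URL, the credentials, and the envelope body — is a placeholder; the request is only constructed, not sent:

```python
import base64
import urllib.request

# Hypothetical endpoint and credentials -- substitute your own.
url = "https://lsfserver.example.com/soap/service"
envelope = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header/>
  <soapenv:Body>
    <!-- service-specific payload goes here -->
  </soapenv:Body>
</soapenv:Envelope>"""

req = urllib.request.Request(url, data=envelope.encode("utf-8"), method="POST")
req.add_header("Content-Type", "text/xml")  # matches Postman's "XML (text/xml)"
token = base64.b64encode(b"user:password").decode("ascii")
req.add_header("Authorization", "Basic " + token)

# urllib.request.urlopen(req) would send it; the response object exposes
# .status (e.g. 200) and .read() for the body.
```

This mirrors steps 1 through 6 above: POST method, Basic auth header, raw XML body with the text/xml content type.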

How to make an HTTP request from command line

There are many instances where you are making HTTP(S) requests from within your code or IPA flow to a Web Service, but you cannot RDP to the server to make sure those requests will actually work once they run there. It would also be really nice to see the response code and return message in case you’re doing something wrong. This is almost always the case with the Infor Cloud, as you are not able to remote to the server and test your request in a browser. But luckily you still have access to LID or IPA. It turns out there is a clean and easy way to run a command from LID that will simulate the HTTP request and bring back the header and body information: PowerShell. The following PowerShell command retrieves the content of the Infor website, for example:
powershell -c "Invoke-WebRequest -UseBasicParsing -Uri https://www.infor.com" | lashow
You can type that command at the LID command prompt and you should see the response come back.
Notice: In the command above I have piped the output to lashow for easier viewing.
Alternatively, if you do not have access to LID, you can use an IPA flow System node to run the command, and then write the output of the command to a file that you can view.
This is how we diagnosed an issue we were having with making calls to the ExpenseWire web service from the Infor Cloud. The server could not create an SSL/TLS secure channel to ExpenseWire via HTTPS, which resulted in this error in LID:
This was especially useful because the IPA flow that was making the Web Service call was simply returning the message:
“Received fatal alert: handshake_failure Message=Could not send Message.”

Lawson Managed Service – On Premise (On-Prem)

We traditionally think of managed services as a hosted solution. But what about all the applications that must stay in-house? Depending on the industry, there are state or federal regulations and compliance matters to consider when deciding where and how to host your application. Your PCI data, for example, must stay confined to a very specific set of parameters defined by the Payment Card Industry Data Security Standard (PCI DSS). HIPAA (the Health Insurance Portability and Accountability Act of 1996) provides data privacy and security provisions for safeguarding medical information, with specific requirements for how and where you can host your data and any applications that have access to that data. You can’t just hire any managed service provider to host all your applications. One option is to host the application and data on-prem but partner with a managed service provider who can manage and maintain it for you.

Managing an enterprise application boils down to a few major areas of concern:

  • Application Server Hardware or VM Hardware
  • Networking Equipment (Routers, Switches, etc.)
  • Operating System (Windows, UNIX, Linux)
  • The application itself
  • The Database
  • File Storage
  • The users

Typically, managed service providers would handle all of the above and send you a monthly bill for using the application, which essentially amounts to a SaaS arrangement. In the on-prem scenario, however, the only part the MSP is responsible for is the application; the organization remains in charge of everything else. This is the most common managed service scenario for our clients at Nogalis: the client manages the hardware and network, the OS patching, and the database administration, while Nogalis takes care of the application and the users.

There are several factors that determine the success of an on-prem MSP engagement:

  • Remote access setup from the provider to the application and related servers.
  • A web-based ticket/issue tracking system that’s accessible by both parties (i.e. ServiceNow)
  • A single point of contact on the client side that can take ownership of the relationship with the MSP. This ensures direct and efficient communication between both parties.
  • A client coordinator on the MSP side to facilitate projects and ongoing communication.
  • Dedicated MSP resources that are assigned to the client. Avoid providers who cannot commit a specific resource as having dedicated resources can make the biggest difference in the quality of work delivered.
  • Availability of a fast and reliable teleconferencing and screensharing service such as WebEx.
  • Weekly status meetings to discuss upcoming change control items and upcoming client needs.
  • Detailed breakdown of time spent on delivering services down to the 15-minute increment.

How to tune the performance of a Lawson 4GL program

Lawson programs, especially batch jobs, can sometimes take hours to complete. In rare instances, a badly written 4GL batch job can even take days to complete depending on the number of records it has to process and how it goes about doing so. Depending on your skill set, you may be able to optimize the code directly and use a debugger to find out how to speed things up. But if you want some statistics about what the program is doing and a quick shortcut, then there’s a utility for that.

The utility is actually several different utilities wrapped into one.

The first is the dbadmin utility, which is used to set some parameters. The main parameter you want to set is TIMESTATS; the timestats function is activated using dbadmin.

Before you go changing anything, show the current settings by running ‘dbadmin get’ from a LID command prompt (or a qsh command prompt on System i).

dbadmin get
Current Value for REUSE=ON
Current Value for DEBUG=OFF
Current Value for DBAPIERRORS=ON
Current Value for TIMESTATS=OFF
Current Value for USERFILTER=
Current Value for PROGRAMFILTER=
Current Value for DATAREAFILTER=
Current Value for TIMESTATSDIR=/tmp
Current Value for IDLETIME=1

Save this off for future reference.

#To enable timestats, run the following commands:

dbadmin set timestats on
dbadmin set programfilter programcode   (optional, specify a program code. e.g. AP175.  The default is all programs)
dbadmin set timestatsdir pathname  (optional, ex. /home/username or C:\timestats.  If not specified, stats files are created in /tmp or %TMPDIR% for Windows)
dbadmin set dataareafilter productline  (optional, specify a productline name.  The default is all productlines)
dbadmin set userfilter username   (optional, the default is all users)
dbadmin set reuse off

#Clear the active database drivers so that the changes become effective
dbadmin idleall
tmcontrol -rp productline programcode  (For online programs only)

#Run ‘dbadmin get’ after setting the options to check that the desired options are enabled.

#Afterwards, the output should look like this:

Current Value for REUSE=OFF
Current Value for DEBUG=OFF
Current Value for DBAPIERRORS=ON
Current Value for TIMESTATS=ON
Current Value for USERFILTER=
Current Value for PROGRAMFILTER=HR211
Current Value for DATAREAFILTER=test
Current Value for TIMESTATSDIR=d:\lawson\temp\timstats
Current Value for IDLETIME=1

Now you’re ready to submit your job again. Once the job is submitted, you should be able to see the stat files get created in the timestatsdir directory.
Wait until the job has completed before viewing the file(s).

You can view the files after the job is complete, but they aren’t all that easy to understand. To make them easier to digest, go to the stats directory and run:

analyze_stats -o > stats.out

You’ll notice that you now have two new files in this directory.

stats.out and a cfg file.

The stats.out file will give you a really great view of what’s going on with your program, while the cfg file can be placed in the xxsrc directory of your code and compiled with the program to optimize it based on the timestats results.

After you’re done:

To disable timestats and re-enable driver reuse:

dbadmin set timestats off
dbadmin set reuse on


How to cut consulting costs by up to 30%

Those of you who know me also know that I’ve been a business application consultant for the past 16+ years. In that time, I’ve helped over 400 companies manage their business applications and have logged over 50,000 hours of billable time to clients. As much as I try to be a good steward of my clients’ project budgets, I can’t help them when they don’t want to be helped. Over the years I have seen so much time go to waste and many projects go over budget, all of which could have been prevented by taking a few measures early on. I thought I would put together some points to help the people in charge avoid some of the pitfalls and take advantage of my years of experience seeing first-hand how projects can be wasteful.

I’m going to list these points in no particular order below:

  1. Travel vs. Remote: Do you really need your consultants to be on site? Is there truly a difference between a consultant being in the same building and being several hundred miles away? Travel is not only expensive from a hard-cost perspective but also from a time perspective. The hard cost of travel for most of the consultants I have managed over the years is around $2000/week. That’s over $100k per year per consultant. You can hire a full-time employee for that much. Not only that, travel time eats into project time and cuts down on productivity tremendously. The average traveling consultant is on the road 12-16 hours per week (between booking, driving, airport time, air travel time, hotel…). That’s 12-16 hours they could be spending on the project to get it done faster, and over 800 hours per year just wasted. Not to mention they’re now less happy to work for you because they’re away from their family, their warm bed, and their home, and have to spend several hours a week dealing with travel-related stress. The true cost of a traveling consultant is several times the actual travel cost. So if you can help it, arrange for your consultants to work remotely.
  2. VPN: Working via a remote connection has gotten so much easier over the past decade. Nowadays I can work on any customer’s private network with a quick click. But there are several dozen flavors of VPN software and they all vary from place to place. Some make life easier and some make life miserable for your consultants resulting in several of hours of wasted time each week. Here are some of the common ones I have to use each week and how I feel about them:
    1. VMware Horizon View Client – I absolutely love this method. I basically get a VM on the client’s network that I can access with a click of an icon. From there I can jump to whatever machine I want and get my work done. What I love about it is it doesn’t mess with my local internet connection at all and I can be logged into several different clients at once with this client.
    2. Citrix Receiver – This is one of my least favorite things to use. Not only is it typically painfully slow, it’s also very confusing and it requires several bits and pieces to be installed. And I’m sure you can set good user/password policies but most of the time it seems I have to reset my credentials way too often, resulting in support calls and several hours can be burned this way. I’m being generous with 2 stars. 
    3. Cisco AnyConnect – I don’t really like client VPNs in general but the Cisco AnyConnect VPN works really well. If set up correctly on the server side, it can be really solid and never drops off. Only problem with client VPN software like this is you can only log onto one client at a time which is pretty badly limiting when you have multiple accounts to maintain. 
    4. GlobalProtect (by Palo Alto Networks) – This one seems to be okay but somewhat finicky. For some reason I can’t connect to it over Wi-Fi and have to be hardwired to get it to work. Although I’m confident this is just an issue on my side, I haven’t had that problem with other software. And again, having to install a local client prevents me from logging into other clients’ networks at the same time, which is a negative.
    5. SecureLink – This thing has wasted so much of my time over the years. I hate it for so many reasons. For one it’s Java based and super confusing and picky. I can only get it to work from Internet Explorer and even then I can’t seem to control the resolution of the RDP session or the login credentials leading to several issues. I give this one 2 of 5 stars. 
    6. F5 Networks SSL VPN – This is a pretty good way to go and it seems to work well until it doesn’t. But overall no issues. Really nice because there’s nothing to install client side and it’s SSL only which leaves me intact to work on other stuff. 
    7. Juniper Networks SSL VPN – Another relatively nice VPN application that allows me to log in and get work done relatively quickly but the fact that it requires a local install and several versions makes it difficult to work with and get right. But overall it works well so I’m giving it 3 stars.
  3. Security – This is a touchy subject. Most companies these days would rather err on the side of caution and throw money into a proverbial furnace rather than open up security any wider than the minimum allowable to let you get your work done. That’s all fine and good until it starts to be wasteful. Case in point, the ability to do ftp. One of the first things we typically have to do as a part of an install of Lawson is download a few GB of software. Well, most of the sites we download from use FTP as their transfer protocol. If the customer has the ftp port completely shut off then we have to download several GB into a local machine first, and then upload it to the server over a super slow VPN connection. What should take an hour can now take days to complete. This is just one example of security measures that don’t make much sense. FTP on its face is no more unsecure than HTTP but security consultants are quick to recommend the port be blocked. Mind you, we’re not talking about transferring sensitive information, this is just software downloads. In my experience, unnecessary security measures can hinder progress and efficiency by as much as 30-40% depending on the project and the restrictions.
  4. Project management and issue tracking software – I have spent a lot of time with every software package and every methodology on the market. In the past 16+ years the best methodology for actually getting work done has turned out to be Agile Scrum. There are several dozen applications that can help with managing a project, and they all work well if you use them well. But please use them well and hold everyone accountable. Invest in something like Pivotal Tracker, Asana, Trello or Jira, stick to it, and make sure everyone is using it correctly. There is no need to ask me for a status report when these tools are being used; status reports are a huge waste of time that no one ever reads. Keeping track of issues, features, and bugs any other way in today’s world should be punishable by termination. We use a combination of JIRA and Trello at Nogalis depending on whether the project requires client interaction or not. We love both tools equally.
  5. A solid single point of contact – More than everything else I’ve mentioned above, this one has probably saved the most time. When clients appoint someone to work as my single point of contact, things just move a lot faster. This is important when dealing with access issues, change control decisions, user testing, documentation, and getting consensus from upper management. When there’s a single point of contact we can interface with who knows the organization well, we don’t have to email a group of people or get on the phone for hours trying to figure out how to get a 5-minute task done. So assign a competent single point of contact to work with your consultants and make sure he/she has a bit of autonomy to make easy decisions. The companion point here is about putting too many people in charge. If I have to answer to more than one person, then I have to explain everything to two people, ask permission from two people, and get feedback from two people. That’s double the time for half the productivity.
  6. Access – A lot of my time is wasted because I don’t have access. I would easily sign whatever document you want me to sign and give you whatever guarantees you need that I won’t do what I’m not supposed to. The way I see it, if you don’t trust me, you shouldn’t have hired me. When I don’t have proper access, I have to come up with alternative ways to get things done. That means making calls, getting others involved, sending emails, and talking to your support line or just compromising and doing things the unintended way. Just give me the access I asked for to get the job done and then remove it when I’m done with the job. You’ll save yourself a ton of money that way.
  7. Tools of the trade – If you’re dealing with consultants, chances are you’re paying $100-$300/hour for the work being done. It may be a good idea to find out what tools they prefer to work with and purchase the tools ahead of time for them. I use a text editor called EditPlus. It’s like $30. But it can save me easily two hours of work each day vs an editor I don’t have experience with. You do the math. (Also, give me enough admin privileges to install it)
  8. A clear SOW – Before starting any substantial project, your first goal should be to determine exactly what everyone is working on. Taking the time to define roles and responsibilities well ahead of time can save you a tremendous amount of wasted time. I recall my first project as an independent consultant: I showed up at the client site in New Hampshire eager to work, but no one knew what I was supposed to work on for the first two weeks, so I just sat around and bugged people for a response. This was before Facebook ;). Some of the PM tools I mentioned above can ensure you never waste your resources this way and are prepared for them when they start.

There are probably dozens of other tips I could give you to help save thousands of dollars a year on your consulting costs but the above should be a good start. If you have other suggestions please feel free to leave them in the comment section below.

How to Call AGS and DME Directly in Version 10

Not much has changed in this version of Lawson when it comes to AGS and DME. You can still call them directly whenever you doubt your security access or just want a sanity check. I have created this article as a quick reference on how they should be run:

To add a Nogalis Currency code:
https://<your server>/servlet/Router/Transaction/Erp?_PDL=TEST&_TKN=CU01.1&_EVT=ADD&_f1=A&_f2=NOG&_f3=Nogalis
To delete the Nogalis Currency code
https://<your server>/servlet/Router/Transaction/Erp?_PDL=TEST&_TKN=CU01.1&_EVT=CHG&_f1=D&_f2=NOG&_f3=Nogalis
To list all currency codes
https://<your server>/servlet/Router/Data/Erp?PROD=TEST&FILE=CUCODES
All of the above examples will need to be modified with your server name and your productline. In the above cases, the TEST productline was used.
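If you call these URLs from a script rather than a browser, it’s safer to build the query string than to concatenate it by hand. Here’s a small Python sketch using the same TEST productline as above; the server name is an invented placeholder:

```python
from urllib.parse import urlencode

server = "lsfserver.example.com"   # placeholder -- use your own server name

def ags_url(params):
    """Build an AGS transaction URL from a dict of parameters."""
    return f"https://{server}/servlet/Router/Transaction/Erp?{urlencode(params)}"

def dme_url(params):
    """Build a DME data URL from a dict of parameters."""
    return f"https://{server}/servlet/Router/Data/Erp?{urlencode(params)}"

# The CU01.1 add from the example above:
add_currency = ags_url({
    "_PDL": "TEST", "_TKN": "CU01.1", "_EVT": "ADD",
    "_f1": "A", "_f2": "NOG", "_f3": "Nogalis",
})

# The currency code listing:
list_codes = dme_url({"PROD": "TEST", "FILE": "CUCODES"})
```

Using urlencode means any field values containing spaces or special characters get escaped correctly, which a hand-built string won’t do.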

How to Implement SSL for Lawson Portal


If you haven’t already done so during installation, implementing SSL afterwards is a bit of a black art. Without going into gory detail, here’s a very simple set of steps to follow:

  1. On the LSF server, stop all the Lawson-related services aside from ADLDS
  2. Import your new certificate (preferably a wildcard cert) into windows as a personal cert
  3. Create a binding within IIS using the imported certificate on port 443
  4. Load up  your favorite ldap editing tool. We prefer this one.
  5. Under O=lwsnrmdata -> OU=resources you’ll find all your users and services. You’ll want to edit the following identities (or more if you have other service URLs):
    • BPM
    • IOS
    • IOSAdmin
    • LSAdmin
    • mingle
    • mingle_env
    • SSO
    • SSOP
    • Environment
  6. In each of the cases above you’re going to modify the Service URL and any other http protocol. You’ll also want to change the PROTOASSERT attribute from “Use HTTP only” to “Use HTTPS always”.
  7. Then change every relevant entry in %LAWDIR%/system/install.cfg that refers to http, protoassert, or the secure ports. They’re relatively easy to find.
  8. You can now reboot the LSF server and restart your services.
  9. If you have Landmark installed, then bring up the rich client
  10. In the GEN productline, navigate to: “Security System management” > Services
  11. Change every service to HTTPS_ONLY and change the service properties to HTTP Port=-1 and HTTPS Port=443
  12. Change all the relevant entries in system/install.cfg
  13. Reboot the Landmark server
  14. Run all the smoke tests with updated URL to verify everything is working
  15. If you are using inbaskets you’ll want to import your certificates into Websphere as well but that’s a topic for another article