Enterprise leaders are investing heavily in smarter enterprise resource planning (ERP) platforms, yet many still struggle to turn insights into action. In a recent article published by ERP Today, author Tirumala Rao Chimpiri explains why modern ERP systems often produce valuable intelligence without enabling faster decisions. According to Gartner, more than 70% of ERP initiatives may fail to fully meet their original business goals by 2027. The problem isn’t a lack of data, analytics, or AI—it’s structural. Insights often surface in one part of the ERP ecosystem while the authority to act resides elsewhere, creating what Chimpiri calls an “intelligence-to-action gap.” ERP systems are designed to ensure accurate transactions and maintain control across operations. While dashboards, predictive alerts, and automation are increasingly common, they don’t necessarily drive coordinated responses across departments. For example, a staffing risk might surface in HR analytics while budget approvals sit with finance and compliance oversight lies elsewhere—slowing action. Adding AI alone doesn’t fix the issue. Research from McKinsey & Company and International Data Corporation shows many organizations struggle to translate analytics into operational decisions at scale. To address this, Chimpiri introduces the CAIP-HE framework, a structural model that aligns four capabilities: cognitive automation, advanced analytics, integration and interoperability, and personalization. Rather than adding new technology, it helps organizations design how intelligence flows from insight to decision and execution. The takeaway: ERP modernization isn’t just about smarter tools—it’s about structuring systems so insights actually lead to action.

 

For Full Article, Click Here

Lawson File Channels can be useful when importing files from a defined local Landmark directory or from a remote one via FTP, SFTP, etc. They are especially useful when a file from an outside organization is delayed. Unlike an IPA scheduler that only runs on a set schedule, a file channel will pick up a file that has been delayed for reasons outside of your organization's control.

 

First, in Process Server Administrator, go to Administrator >> Channels Administrator (assuming you have permissions)

Then create a new File Channel:

Now that you know how to create one, here is how they work: a File Channel essentially defines a source directory path that is scanned every X minutes for files specified by a File Channel Receiver.

 

Below is an example of a File Channel searching a remote server via FTP:

Since the above is an FTP directory, the connection opens in a default directory, and you define the Source File Directory relative to it. If the FTP server connects and starts in the ..\inbound directory, and your source directory is ..\inbound\APFinance, then the Source File Directory is simply APFinance
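In other words, the Source File Directory is just the relative path left over once the FTP session's starting directory is stripped off. A quick sketch of that relationship (the directory names are the hypothetical ones from the example above, written with forward slashes for illustration):

```python
from pathlib import PurePosixPath

def source_file_directory(ftp_login_dir, target_dir):
    # The Source File Directory is whatever remains of the target path
    # once the FTP session's starting directory is removed.
    return str(PurePosixPath(target_dir).relative_to(PurePosixPath(ftp_login_dir)))

print(source_file_directory("/inbound", "/inbound/APFinance"))  # APFinance
```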

The Error File and In-Progress File Directories are always directories on the local Landmark server, so make sure you establish those directories first.

 

Additionally, you can change the File Channel Type. Local is the Landmark server; Amazon S3 is a specified Amazon S3 instance:

All other Parameters are self-explanatory, so let’s move on to File Channel Receivers.

You can have multiple File Channel Receivers, each defining a different file name pattern, for every File Channel. This way, if you have one Source Directory, you can search for different file names like ACH_*.txt, APC_*.xml, etc.
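Assuming the receiver file names behave like standard glob wildcards (as the ACH_*.txt example suggests), the matching can be pictured like this, with hypothetical file names in one Source Directory:

```python
import fnmatch

# Hypothetical files sitting in a single Source Directory
files = ["ACH_20240101.txt", "ACH_20240102.txt", "APC_batch1.xml", "readme.docx"]

# Each File Channel Receiver contributes its own pattern
for pattern in ("ACH_*.txt", "APC_*.xml"):
    print(pattern, "->", fnmatch.filter(files, pattern))
```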

The “Process” field is the IPA process that will be processing the file.

 

The Data field is also important. File Data is the ideal selection.

File Name: Triggers one workunit with just the file name.

File Data: Triggers one workunit with the file’s entire contents.

File Line: Triggers a separate workunit for each line of the file.
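To make the three settings concrete, here is a small illustrative sketch (not Lawson code; real workunits are created by IPA) of how many workunits each Data setting would trigger for the same file:

```python
import pathlib, tempfile

def workunits(path, mode):
    # Illustration only: mirrors the File Name / File Data / File Line
    # descriptions above.
    p = pathlib.Path(path)
    if mode == "File Name":
        return [p.name]                    # one workunit, file name only
    if mode == "File Data":
        return [p.read_text()]             # one workunit, full contents
    if mode == "File Line":
        return p.read_text().splitlines()  # one workunit per line
    raise ValueError(f"unknown mode: {mode}")

f = pathlib.Path(tempfile.mkdtemp()) / "ACH_test.txt"
f.write_text("row1\nrow2\nrow3\n")
print([len(workunits(f, m)) for m in ("File Name", "File Data", "File Line")])  # [1, 1, 3]
```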

Once the file is picked up, it is moved to the In-Progress directory while it gets processed, regardless of the Data field setting. The In-Progress directory acts like an archive folder in this way.

 

That’s it! File Channels are great when receiving a file from an outside organization that sends it on a schedule. That way, if it happens to be delayed, the File Channel will still pick it up when it finally arrives in Lawson.

 

Cloud ERP (enterprise resource planning) migrations aren’t just about moving systems—they’re about cleaning up your data first. In an ERP Today article, Mageshwaran Subramanian argues that companies often rush to go-live and overlook the most critical factor: data quality. A cloud migration isn’t a simple “lift and shift.” It’s a chance to rethink, filter, and restructure data so the new system doesn’t inherit years of technical debt—especially in an era where AI-readiness is a major selling point. One key principle is shifting validation early. Borrowing from “shift-left” thinking in software development, Subramanian stresses that data should be validated at extraction—not after loading. The longer bad data lingers in the pipeline, the more expensive and disruptive it becomes. Archiving is another big lever. In on-prem systems, storing everything felt harmless. But cloud ERP pricing changes that equation. Migrating outdated purchase orders and decades-old records increases subscription costs and hurts performance. The smarter move? Separate active data (what runs the business today) from historical data (better stored in lower-cost archives or data lakes). He also highlights the importance of defining a “golden record.” Modern cloud platforms rely on unified entity models, so duplicate vendor/customer records must be consolidated with clear de-duplication and survivorship rules. The bottom line is that AI won’t fix messy data—it will amplify it. Strong governance, clean master data, and disciplined archiving turn a cloud migration from a software upgrade into a long-term strategic foundation.

For Full Article, Click Here

Modern businesses move fast—but many ERP systems don’t. In a recently published article from Forbes, council member and Dynamics Square co-founder Manish Goyal argues that traditional ERP (enterprise resource planning) systems were built for stability and control, not constant change. As markets, regulations, and customer expectations evolve, companies are discovering their ERP platforms can’t keep up. In fact, research from McKinsey & Company shows only about 20% of organizations capture more than half of their expected ERP benefits—often because they treat ERP as a technical project instead of a strategic foundation.

Goyal’s solution is to rethink ERP as composable, configurable, and continuously evolving. Instead of relying on monolithic systems, he points to the idea of composable ERP—an approach championed by Gartner—where modular components can be assembled and reassembled as business needs change. The goal isn’t to eliminate a stable core, but to separate what must stay consistent from what can flex. He also warns against heavy customization. Research from the University of Agder and Deloitte suggests too much custom code increases cost and complexity. Instead, organizations should prioritize configurability—using business rules, APIs, and low-code tools to adapt processes without breaking the system. Most importantly, ERP shouldn’t be treated as a one-time transformation. It should evolve continuously through disciplined governance, cross-functional oversight, and incremental updates. The takeaway is to stop thinking of ERP as a project you “finish.” Start treating it as a platform you continuously refine to keep pace with strategy and change.

 

For Full Article, Click Here

When working with Aurora MySQL, it’s common to assume that running a TRUNCATE TABLE will completely clear out space and return your database to its pre-load size. Unfortunately, that’s not quite how it works — and it often surprises people during large migrations.

Here’s why truncating doesn’t always free space, and what you should do instead.

TRUNCATE vs. Space Reclamation

In MySQL (and Aurora MySQL), TRUNCATE TABLE is a fast operation that:

  • Deletes all rows from a table.
  • Resets the auto-increment counter.
  • Removes and recreates the underlying table definition internally.

However, with InnoDB tables, the physical storage file is not always shrunk automatically. Instead:

  • Pages inside the tablespace are marked as free.
  • The file size on disk (or Aurora’s volume usage) often stays the same.

That means your database volume won’t shrink even if the table is empty.

The Role of OPTIMIZE TABLE

To actually reclaim space after large deletes or truncations, you need to run:

OPTIMIZE TABLE your_table;

What this does:

  • Creates a new copy of the table with active rows.
  • Rebuilds indexes.
  • Frees up unused pages and defragments data.
  • Releases the unused space back to Aurora’s volume.

In practice, many users see 20–40% of their allocated storage reclaimed after running OPTIMIZE TABLE on heavily churned datasets.
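The delete-then-rebuild mechanics are easy to reproduce locally with SQLite, where VACUUM plays the role OPTIMIZE TABLE plays in Aurora MySQL. This is an analogy for the storage behavior, not Aurora itself:

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO t (payload) VALUES (?)",
                [("x" * 500,) for _ in range(10_000)])
con.commit()
size_loaded = os.path.getsize(path)

con.execute("DELETE FROM t")  # rows gone, but pages are only marked free
con.commit()
size_after_delete = os.path.getsize(path)  # file has not shrunk

con.execute("VACUUM")         # rebuild the file, releasing the free pages
size_after_vacuum = os.path.getsize(path)
con.close()

print(size_after_delete == size_loaded, size_after_vacuum < size_after_delete)
```

Just like OPTIMIZE TABLE, the rebuild step is what actually returns space, and it does so by copying the live data into a fresh file.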

Aurora-Specific Gotchas

  1. Shared Storage Model
    Aurora uses a distributed storage layer, so file-per-table semantics can be a bit confusing. Even with innodb_file_per_table=ON, the space isn’t automatically released at the cluster volume level until the table is rebuilt.
  2. Performance Impact
    • OPTIMIZE TABLE is blocking for writes and can run for minutes to hours on very large tables.
    • Plan downtime or run during low-traffic windows.
  3. Monitoring
    Watch the VolumeBytesUsed CloudWatch metric before and after running OPTIMIZE. If you only truncate, the metric won’t move. After optimize, you’ll see a real decrease.
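One way to check the metric programmatically is with boto3. The cluster identifier below is a placeholder, and the helper only builds the request parameters; the actual CloudWatch call needs AWS credentials, so it is shown commented out:

```python
import datetime

def volume_bytes_used_query(cluster_id, hours=24):
    """Build get_metric_statistics kwargs for Aurora's cluster-level
    VolumeBytesUsed metric in the AWS/RDS namespace."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "Namespace": "AWS/RDS",
        "MetricName": "VolumeBytesUsed",
        "Dimensions": [{"Name": "DBClusterIdentifier", "Value": cluster_id}],
        "StartTime": now - datetime.timedelta(hours=hours),
        "EndTime": now,
        "Period": 3600,          # one datapoint per hour
        "Statistics": ["Average"],
    }

params = volume_bytes_used_query("my-aurora-cluster")  # placeholder cluster id
# import boto3
# cw = boto3.client("cloudwatch")
# for dp in cw.get_metric_statistics(**params)["Datapoints"]:
#     print(dp["Timestamp"], dp["Average"])
print(params["Namespace"], params["MetricName"])
```

Compare the averages before the TRUNCATE/OPTIMIZE pass and after it to confirm the space was really released.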

Best Practices

  • For staging or migrations: Always follow TRUNCATE with OPTIMIZE TABLE to reclaim disk space.
  • For production: Schedule optimizations for off-peak hours, and test on smaller tables first.
  • For ongoing operations: If your workload does lots of churn (bulk loads/deletes), regular optimization may be necessary.

Final Takeaway

In Aurora MySQL, TRUNCATE TABLE clears rows but doesn’t guarantee space reclamation. To actually shrink your database footprint, you need to rebuild the table with OPTIMIZE TABLE. Think of it as the “vacuum” step — without it, your cluster will keep carrying dead weight.

Artificial intelligence is turning cloud ERP (enterprise resource planning) into the brain of the modern enterprise. In a recent article posted on Futurism, ERP expert David Deuri explains how AI is no longer an add-on—it’s now built directly into cloud ERP platforms. Instead of just recording transactions, today’s ERP systems can predict outcomes, automate decisions, and actively guide business strategy. The big shift? ERP has moved from being a “system of record” to a “system of intelligence.” Traditional systems told you what already happened. AI-powered ERP tells you what’s likely to happen next—and what you should do about it. Companies are upgrading to cloud ERP not just for scalability, but for smarter capabilities. AI enables predictive forecasting for demand and revenue, automates repetitive tasks like invoice matching and reconciliations, and delivers real-time dashboards with alerts and recommendations. Supply chains benefit from dynamic demand planning and risk detection, while finance teams get faster closes and better anomaly detection. One of the most exciting developments is the rise of AI “agents” inside ERP systems. These agents can monitor workflows, trigger follow-ups on delayed payments, recommend procurement adjustments, and coordinate tasks across departments—without constant human input. Add natural language queries (“Show me this quarter’s revenue variance”), and ERP becomes far more user-friendly. There are still challenges around data quality, integration, and change management. But as Deuri highlights, AI-enabled ERP is quickly becoming a competitive must-have—transforming ERP from a back-office tool into a strategic engine for smarter, faster growth.

 

For Full Article, Click Here

This procedure explains how to subscribe to bookmarks.

You can only subscribe to a bookmark if you have been given access to it by your system administrator. If the bookmark appears in your list, you have access to it. (See your system administrator if you need access to a bookmark that does not appear in your Bookmarks list.)

To subscribe to bookmarks

  1. From the Portal home page, select Preferences (check marks icon)>Content.
  2. Click on closed books to expand them, if necessary.
  3. Bookmarks you are subscribed to have check marks in front of them.

If the check mark is removed and you want to add it, click on the box.

Unchecked bookmarks will not appear in your navigation pane or content window.

Cloud migration is no longer a “nice-to-have” IT upgrade—it’s becoming a business necessity in 2026. In a recent article for The AI Journal, technology journalist Erika Balla explains why cloud migration services matter more than ever. For many organizations, the move to the cloud doesn’t start with strategy slides—it starts with frustration: unstable servers, constant restarts, and aging infrastructure that can’t keep up.

At its core, cloud migration means moving applications, data, and infrastructure from on-premises systems to cloud environments. But as Balla points out, it’s rarely a simple lift-and-shift. The process often uncovers outdated systems, bloated data, and legacy apps that need rework before they function properly in the cloud. She outlines common migration types—data, application, infrastructure, and hybrid models—as well as the well-known “6Rs” strategies: rehost, replatform, refactor, repurchase, retire, and retain. Each offers a different path depending on business goals and technical realities.

Of course, the journey isn’t always smooth. Security concerns, compliance requirements, cost overruns, and limited in-house expertise can all complicate migration efforts. That’s why best practices matter: start with a full audit, move in phases, prioritize security, test thoroughly, and monitor cloud spending closely. This is where cloud migration services prove their value. Experienced specialists help reduce downtime, manage risk, ensure compliance, and prevent expensive missteps—freeing internal teams to focus on day-to-day operations.

The payoff? Faster applications, improved scalability, stronger disaster recovery, and more room for innovation. Balla’s message is clear: cloud migration isn’t just a technical shift—it’s a strategic move toward a more resilient, future-ready business.

 

For Full Article, Click Here

Enterprise resource planning (ERP) systems are treasure troves of sensitive business data—which makes securing them a top priority in today’s threat landscape. In a recent TechTarget article, Kevin Beaver of Principle Logic outlines eight practical security best practices for modern ERP environments. With supply chain attacks on the rise and remote access now standard, he argues that ERP security can’t be treated as an afterthought. One key theme: whether your ERP is on-premises or in the cloud, security is still your responsibility. Assuming a SaaS vendor has everything covered is a risky misconception. Even in cloud environments, organizations must stay actively involved in monitoring, access control, and risk management.

Beaver’s first recommendations are foundational: enable multifactor authentication (preferably app- or token-based, not SMS) and enforce strong password policies. Just as critical is staying on top of software updates. Unpatched systems—especially those missing years-old fixes—are easy targets for attackers. Beyond technical controls, he emphasizes people and process. Educate users and make them partners in security. Build and regularly refine a documented incident response plan. Conduct vulnerability scans, penetration tests, and threat modeling to uncover weak spots. For cloud ERP, review vendor SOC 2 reports at a minimum. Ongoing monitoring is also essential. Whether handled in-house or outsourced, organizations need visibility into logs, alerts, and suspicious activity.

Beaver urges companies to take a structured approach: know what assets exist, understand the risks, and act decisively to mitigate them. His bottom line is simple: ERP security doesn’t fix itself. The best time to strengthen it is before a breach forces your hand.

 

For Full Article, Click Here

Follow these steps to fix certain Lawson LBI dashboard reports that prompt for an ID and password. This typically happens after a migration to a new version of LBI. You’ll need to update these report parameters.

First, edit the report on dashboard:

Next, copy the existing URL:

Run the report in Report Admin with data refresh and grab this part of the URL (see screenshot), then append it to the end of the original URL you copied from the report above:

It should look like this:

Now it should load properly without asking for credentials.