Before you spend a dollar on TMS or WMS software, there is pre-work that needs to happen. Most companies skip part of it — some skip all of it. These companies discover too late that they've selected the wrong product and are trying to integrate something that simply wasn't designed for how they actually operate.
In a recent LinkedIn Live, our Principal of Client Advisory & Partnerships, Tony Wayda, and Cornerstone Edge's Founding Partner, Brian Carlson, discussed TMS and WMS integrations: where companies go wrong, and what it costs when they do.
Key takeaways included:
- Define your strategy and detailed requirements before evaluating any software
- Require scripted demos using your actual data and score vendors consistently
- "Integrated" means different things to different vendors—get specifics
- Analyst rankings are a starting point, not a selection filter
- Going live isn't the same as working. Close the go-live gap deliberately
Watch their entire conversation or read a summary below.
The Real Problem Isn't the Software
When an integration fails, the business does not care which system was at fault. No one calls the help desk to report that the WMS passed a malformed shipment detail update, the middleware rejected it, and the TMS never received it. What they know is simple: they cannot see what is on the shipment, customer service cannot answer questions, and the customer has lost visibility as well.
"It's everybody's problem when it doesn't work right," Brian explained.
That's precisely why the selection process deserves more rigor than most companies give it. The failure usually isn't a bad product. More often, it's a mismatch between what a product actually does and what the organization assumed it did. That gap is created during evaluation, not during implementation.
Detailed Requirements Are the Foundation
Every vendor can clear the bar on high-level requirements. They can all handle load planning. They all support carrier tendering. They all manage inbound ASNs. The question is: at what level of detail, and under what real-world conditions?
Tony shared a real example that illustrates the gap. During a vendor portal implementation for an inbound TMS, the client had very large purchase orders—hundreds of line items per PO. Everything looked fine in the demo, but when testing began with actual data, the portal couldn't even render the PO because of the volume. The feature existed. It just didn't work at the scale the client needed.
That's not a vendor failure.
That's a requirements failure.
If the evaluation never surfaced the PO size as a constraint, no vendor would have known to test for it. The detailed requirement "vendor portal must support POs with X line items" is the only thing that would have caught it.
Brian added a parallel example from the WMS side: a company that had scripted demos, mapped out EDI, validated ASN flows at pallet and case level, and done everything right only to discover that an external vendor—a key supplier—was willing to use one portal, but not two.
The technology worked. The people alignment hadn't happened. That's a requirement too, and it belongs in the process just as much as any system specification.
What "Detailed Requirements" Actually Means
- Operational specifics — Not "supports load planning" but "supports multi-stop truckload with dynamic updates after carrier assignment"
- Data volume and scale — What happens at the maximum size of your real POs, shipments, or SKU counts?
- External stakeholder flows — Which vendors, carriers, or partners will touch the system, and what are they actually willing to do?
- Integration depth — Not "WMS and TMS are integrated" but exactly what integrations exist, and which fields pass in which direction, at what trigger events?
- Exception handling — What happens when a truck is late, a PO changes after carrier tender acceptance, or a carrier misses a pickup?
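To make requirements like these testable rather than aspirational, some teams capture them as structured data that can be scored vendor by vendor during demos. The sketch below is purely illustrative; the field names and example rows are our own assumptions, not any vendor's schema or any client's actual requirements.

```python
from dataclasses import dataclass

@dataclass
class IntegrationRequirement:
    """One row of a detailed TMS/WMS integration requirement matrix."""
    data_element: str         # what information moves
    direction: str            # "WMS->TMS", "TMS->WMS", or "two-way"
    trigger_event: str        # the business event that fires the update
    max_latency_minutes: int  # how fresh the data must be to stay usable

# Hypothetical examples of the level of detail worth demanding
requirements = [
    IntegrationRequirement("Shipment contents", "WMS->TMS", "DC closes the load", 5),
    IntegrationRequirement("Carrier ETA update", "TMS->WMS", "Carrier reports delay", 15),
    IntegrationRequirement("PO line items", "ERP->vendor portal", "PO created or changed", 60),
]

for r in requirements:
    print(f"{r.data_element}: {r.direction}, on '{r.trigger_event}', "
          f"within {r.max_latency_minutes} min")
```

A matrix like this doubles as a demo script: each row is something a vendor can be asked to show working with your data, and something you can score consistently across vendors.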
"Integrated" Is Not a Binary Condition
One of the most persistent misconceptions in supply chain software buying is treating integration as a yes/no question. Vendors routinely say their TMS and WMS are integrated. While that may be technically true, the level of that integration varies enormously.
Tony described a flow-loading implementation for a large retail client using Manhattan PKMS and an earlier version of Manhattan TP&E. The operation required building a shipment in the TMS early enough to tender to carriers—before knowing exactly what product would be on it. Once the DC closed the load, the TMS needed an immediate update with the actual contents. That two-way, event-triggered communication between systems had to be built. It wasn't there out of the box, even with two products from the same vendor.
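The event sequence in that flow-loading scenario looks simple on paper, but each arrow between systems is an interface someone has to build. The sketch below models the sequence in plain Python to make the two-way, event-triggered dependency concrete; every class and method name here is an illustrative assumption, not real Manhattan PKMS or TP&E functionality.

```python
# Illustrative model of the flow-loading sequence described above.
# None of these classes correspond to actual Manhattan APIs.

class TMS:
    def __init__(self):
        self.shipments = {}

    def build_shipment(self, shipment_id, planned_contents):
        # Shipment is created early, before actual contents are known
        self.shipments[shipment_id] = {"contents": planned_contents, "tendered": False}

    def tender_to_carrier(self, shipment_id):
        # Tendering happens against the planned (not final) contents
        self.shipments[shipment_id]["tendered"] = True

    def on_load_closed(self, shipment_id, actual_contents):
        # Event-triggered update from the WMS once the DC closes the load
        self.shipments[shipment_id]["contents"] = actual_contents

class WMS:
    def __init__(self, tms):
        self.tms = tms

    def close_load(self, shipment_id, actual_contents):
        # Closing the load must fire the cross-system update immediately
        self.tms.on_load_closed(shipment_id, actual_contents)

tms = TMS()
tms.build_shipment("SHP-1", planned_contents=["forecast mix"])
tms.tender_to_carrier("SHP-1")  # tender goes out before contents are final
WMS(tms).close_load("SHP-1", ["SKU-A x40", "SKU-B x12"])
print(tms.shipments["SHP-1"]["contents"])  # the TMS now reflects actual contents
```

The point of the sketch is the last call: if `close_load` does not push an update back into the TMS, the carrier is hauling a shipment the TMS still describes by its forecast. That callback is exactly the piece that had to be custom-built.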
Brian described an appliance manufacturer that specifically chose a TMS from the same vendor as their WMS because they were told the products were integrated. When the implementation team got into the details, they discovered the vendor had built an integration between their WMS and an alternate vendor’s TMS, but had never actually integrated their own two products at the level the client needed.
The question to ask isn't "Are these products integrated?"
It's “What specific data passes between them, in which direction, triggered by which events, and at what frequency?”
Shipment Visibility and Labor Planning Are Directly Connected
The integration between TMS and WMS drives workforce decisions in real time. If a carrier is running an hour late and the DC doesn't know until the truck was supposed to arrive, that's an hour of labor standing around. If a live outbound load is delayed and the dock team is idle, the cost shows up in overtime, inefficiency, and carrier relationships.
Tony described the practical implication: a DC manager who knows a truck is late can pull a drop trailer from the yard and keep the team productive instead of waiting. That decision is only possible with visibility into inbound shipment status, which requires a live connection to the carrier, whether through EDI, a real-time transportation visibility (RTTV) platform, or direct carrier communication, flowing into the WMS in a usable way.
The same logic applies outbound, and it extends to rescheduling. Brian noted that organizations need the ability to push updates back to carriers as conditions change—it has to be a two-way flow, not just a data feed coming in.
What the Major Vendors Are Actually Building
The clearest shift in the market right now centers on three closely related capabilities: unified platforms, tighter integration between transportation and warehouse systems, and greater software extensibility for customers.
Leading vendors are moving toward unified platforms where TMS and WMS operate on a shared data model, allowing changes to orders, shipments, and execution events in one system to immediately propagate across the platform without duplicate updates or synchronization issues.
For organizations that continue to run separate systems, vendors are prioritizing much tighter native integrations so transportation planning, warehouse execution, and inventory movements can operate as part of a coordinated workflow rather than disconnected processes.
At the same time, modern platforms are being designed with far greater extensibility, enabling customers to build their own workflows, applications, and automation directly on top of the core platform without breaking upgrade paths or relying on fragile custom integrations. Together, these shifts are redefining how transportation and warehouse systems function within the broader supply chain technology stack.
Manhattan Associates is a notable example. Their unified platform shares a common data model across TMS and WMS, built on a microservices architecture that gives customers more flexibility to extend without waiting for the vendor to deliver modifications. Tony noted that early implementations are underway. Most JBF clients are still running one system or the other rather than both in production together, though Manhattan does have clients live on both TM and WM on the unified platform.
Oracle is pushing tighter integration between Oracle Transportation Management and its newer Oracle Warehouse Management platform, pulling warehouse execution closer to both the ERP and transportation layers. Blue Yonder is moving in a similar direction following its acquisition of One Network Enterprises, aiming to unify planning, execution, and network collaboration within a broader platform. However, as Tony noted, integrating capabilities across acquired products is rarely immediate. Consolidating data models, workflows, and user experiences across historically separate systems often takes several product cycles before the vision of a truly unified platform is realized.
Brian pointed to the historical parallel: HighJump (now Korber) built its original market position on what they called "the power of one"—a single technology stack spanning WMS, labor management, and yard management. That concept is returning, but on modern microservices foundations rather than monolithic architectures.
Softeon represents a different approach: more monolithic in product structure, but built with microservices-based integration tooling that gives customers flexibility at the connection layer.
The practical implication for buyers: understand exactly which version of a platform you're being shown. Vendors growing through acquisition often have multiple products under the same brand name, and the integration depth varies significantly across them. When Blue Yonder pitches One Network alongside their legacy TMS, ask which platform each capability actually lives on and what the roadmap looks like for consolidation.
Run Scripted Demos with Your Own Data
Vendor demos are not neutral territory. Without specific guidance, vendors will show you their strengths—the flows they've optimized, the scenarios that look best in their UI, the features that close deals. That's not deception; it's how sales works. Your job is to change the terms.
Tony recommends requiring vendors to build demos scripted by the client, using the client's actual data, walking through a day-in-the-life of their specific operation. Most vendors don't love this approach. It's more work, it exposes gaps, and it moves the evaluation from a product showcase to a capability validation. But there’s no other way to know for certain if a vendor is truly a match for your organization’s unique needs and requirements.
Brian added a critical caution about RFI responses: vendors have been known to answer requirements vaguely, attributing capabilities to "the WMS" when they mean two different products within their portfolio, neither of which individually does what the requirement asks. Reviewing RFI responses at that level of precision is tedious, but it's also the only way to catch it.
The Gartner Quadrant Is a Starting Point, Not a Selection Tool
Leaders in the top right of analyst rankings got there by being broadly capable across many customer types and industries. That's not the same as being the right fit for your specific operation. "Please do not assume the guy in the top right is going to meet your unique business needs—because it's not going to happen," Tony said.
There are tier-two vendors in the market that will cover every one of your actual requirements while a tier-one platform leaves gaps in the things that matter most to you. Brian used the phrase "secret sauce"—the things an organization does that are genuinely unique and non-negotiable. Those requirements deserve their own category in your evaluation. A vendor in the top right quadrant might miss all of them. A vendor outside it might nail them.
Use analyst rankings to build a long list. Then do your own work.
The Go-Live Gap
The final point Tony and Brian raised is one that comes after selection, but shapes expectations during it. Going live and functioning operationally are not the same thing.
Software can be deployed, technically live, and still fail to deliver value. Tony called this "the go-live gap": the distance between the system being switched on and the organization actually using it effectively. The gap isn't always the software's fault. It's often a change management issue, a process gap, or an adoption problem that wasn't adequately addressed during implementation.
Conclusion
For organizations that already own TMS or WMS software and are struggling: before concluding the product is wrong, make sure you've diagnosed correctly. A missing requirement, a process that never changed, or an integration that was scoped out and never revisited can create symptoms that look like software failure. It's worth asking those questions before making another major investment.
Transportation management systems have become foundational infrastructure for modern supply chains, and selecting and implementing the right platform is one of the most consequential technology decisions a logistics leader will make.
Our TMS Buyer's Guide provides a structured framework to help logistics leaders approach transportation technology decisions with clarity and confidence.
About the Author
Tony Wayda is an Engagement Principal at JBF Consulting with more than 30 years of experience in transportation and supply chain systems assessment, selection, design, and implementation. He has led global transformation programs for leading brands across retail, apparel, manufacturing, CPG, and 3PL industries. Tony has deep expertise with TMS, routing, scheduling, WMS, and visibility platforms, including Manhattan, Descartes, Blue Yonder, Oracle, and E2Open. Known for bridging technology and operations, he partners with shippers to develop strategic roadmaps, lead solution design, and enable long-term value realization.
If you're evaluating TMS or WMS software, or if you've already implemented something that isn't working the way you expected, contact us today. Whether it's identifying requirements, diagnosing a go-live gap, or defining your strategy, we’re here to help.
FAQs
What should companies do before evaluating TMS or WMS software?
Before evaluating any software, companies should define a clear supply chain strategy and develop detailed, operational-level requirements — not high-level bullet points. This means documenting specific workflows, data volumes, exception scenarios, and external stakeholder dependencies (carriers, vendors, partners). Skipping this pre-work is the single most common reason organizations select the wrong product and end up with costly workarounds after go-live.
Don't accept "yes, they're integrated" as an answer. Ask vendors to specify exactly which data fields pass between the systems, in which direction, triggered by which events, and at what frequency. Request that they demonstrate this integration using your actual data in a scripted demo. Many vendors have built tighter integrations with third-party products than with their own platforms — the only way to confirm the depth is to test it against your specific operational requirements.
What is a scripted demo, and why does it matter?
A scripted demo is a structured vendor demonstration built around a buyer's own data and workflows — typically a "day in the life" of their operation, including common exceptions. Unlike standard vendor demos that showcase strengths, scripted demos expose gaps in areas that matter most to the buyer. They allow organizations to score vendors on consistent criteria and make a data-driven decision rather than one based on polished sales presentations.
Why do TMS and WMS implementations fail after go-live?
Many implementations fail not because of the software itself, but due to a "go-live gap" — the distance between a system being technically live and the organization actually using it effectively. Root causes include incomplete requirements that were never captured, internal processes that weren't redesigned to match the new system, poor change management and user adoption, and integrations that were scoped out during implementation and never revisited. Before concluding a product is wrong, it's worth diagnosing whether the issue is process, people, or a missing configuration.
Does buying TMS and WMS from the same vendor guarantee they are integrated?
Not necessarily. While same-vendor platforms theoretically share a common data model, real-world implementations have revealed cases where vendors have more mature integrations with third-party systems than with their own products. The right question is not whether the products share a logo, but what version of each platform is being sold, how deep the integration actually is today, and what the roadmap looks like for consolidation. Always validate integration claims independently and test them with your data before committing.
