Artificial intelligence is no longer a future-state discussion in supply chain and logistics. It’s already embedded in daily workflows, quietly reshaping how work gets done. But while the technology is advancing quickly, the more meaningful shift is happening elsewhere: in how professionals think, learn, and make decisions.
In a recent webinar, Rachelle Yeingst, Strategy Director at JBF Consulting, and Thomas Deakins, EVP of Alliances at Redwood Logistics, discussed how AI is actually being taught, applied, and evaluated across supply chain education, teams, and technology.
At its core, the conversation around AI in supply chain isn’t really about automation. It’s about judgment.
As Thomas put it, “You have to become the expert of your domain. And the reason you have to do that is because once you go into your role… you have to start saying, okay, where are the processes that might be manual today or something I could do better using AI?”
Contrary to popular belief, AI doesn’t eliminate the need for expertise. In fact, the better you understand your data, your processes, and your constraints, the more value you can extract from intelligent tools.
Watch their conversation here or keep reading for a summary.
From Data Entry to Decision Ownership
Entry-level supply chain roles used to be defined by manual tasks: appointment scheduling, carrier calls, data entry. Many of those functions are now embedded directly into transportation, warehouse, and planning systems (and often enhanced by AI).
What’s replacing those tasks isn’t “less work,” but different work.
AI can surface anomalies, predict disruptions, and suggest actions, but it cannot decide whether those actions make sense in context. That responsibility still sits squarely with the human in the loop.
Rachelle framed the shift succinctly: today’s professionals “have all of these tools available to them, but that doesn't replace the need to critically think and interpret.”
The responsibility is no longer just to execute steps, but to understand why outcomes look the way they do and what to do next.
Bad Data Still Breaks Good Technology
For all the excitement around generative and agentic AI, one truth hasn’t changed in decades: bad data produces bad outcomes.
Thomas emphasized, “AI is not going to be any good if it’s working with bad data. You’re going to have a bad outcome.” Whether loading rate tables into a TMS or training an AI model, the fundamentals remain the same. Data accuracy, structure, and governance determine success far more than the sophistication of the algorithm.
This is why teaching the “hard way” still matters. Thomas explained, “I’m teaching the foundations of data. If you don’t know what you’re cleansing, it’s not going to be very valuable.”
In practice, AI can absolutely support data cleansing and anomaly detection, but without domain knowledge, users can’t distinguish between a meaningful exception and a harmless outlier. Technology may flag issues faster, but humans still decide what matters.
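To make that distinction concrete, here is a minimal sketch (not from the webinar; the lane data and threshold are hypothetical) of how a tool might flag a statistical outlier in freight costs. The flag is the easy part; deciding what it means is not.

```python
# A minimal sketch of why flagged anomalies still need human judgment.
# Data and threshold are illustrative, not from any real system.
from statistics import mean, stdev

# Daily cost per mile for one lane; the last value spikes.
cost_per_mile = [2.10, 2.15, 2.08, 2.12, 2.11, 2.09, 3.40]

avg = mean(cost_per_mile)
sd = stdev(cost_per_mile)

# Flag anything more than 2 standard deviations from the mean.
flagged = [x for x in cost_per_mile if abs(x - avg) > 2 * sd]

# The tool surfaces the spike; only a domain expert can say whether it is
# a data-entry error, a spot-market surge, or an expected expedite charge.
print(flagged)
```

The math finds the exception in milliseconds. Whether it is a keying error, a legitimate expedite, or the start of a market trend is exactly the contextual judgment the speakers argue still belongs to the human.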
Prompting Is a Skill
One of the most overlooked competencies emerging alongside AI is prompt engineering. Many users interact with generative AI the same way they use a search engine: short queries and little context.
That approach limits value.
“If I type something into Google and I’m asking it a question, that’s usually how people are using AI,” Thomas noted. “But if we really understand how to use GenAI from a prompt engineering standpoint, then we can truly get more out of the data.”
Rachelle shared that effective prompting mirrors how professionals already think: providing context, defining desired outcomes, and iterating on responses. “Even when I’ve prompted the AI, I’m still going back and forth with it… making sure that it’s an output that I can actually articulate and retain.”
AI isn’t a one-and-done tool. It’s conversational. The value compounds when users treat it like a collaborator rather than a vending machine for answers.
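The contrast the speakers describe can be sketched in a few lines. This is an illustrative example (not from the webinar; the scenario and wording are hypothetical) of a search-style query versus a prompt that supplies context, a desired outcome, and constraints:

```python
# Illustrative sketch: search-style query vs. a structured prompt.
# The scenario and phrasing are hypothetical examples, not a template
# from the webinar.

search_style = "why are my freight costs up"

structured = "\n".join([
    # Context: who is asking and what data the model is working with
    "You are assisting a transportation analyst reviewing Q3 freight spend.",
    "Context: spend rose 12% quarter over quarter; carrier mix and fuel surcharges changed.",
    # Desired outcome: the shape of the answer you want back
    "Task: list the three most likely cost drivers, ranked, with one sentence of reasoning each.",
    # Constraints: keep the output honest and usable
    "Constraints: flag any driver you cannot support from the context as an assumption.",
])

print(structured)
```

The first prompt invites a generic answer; the second gives the model something to reason against and, just as importantly, gives the user a basis for evaluating the response and iterating on it.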
Strategy Before Labels
Too often, companies start with the question, What is our AI strategy? rather than What problem are we trying to solve? Rachelle challenged that framing directly: “Why are we starting with a label? What are the challenges that we’re looking to solve right now? If it’s solved with AI, that’s a label the tech provider should give you.”
This distinction matters because hype cycles can derail otherwise sound initiatives. As Thomas pointed out, studies show that “90% of all AI projects fail,” often because outcomes aren’t clearly defined, resources aren’t aligned, or the necessary data and talent aren’t in place.
Whether AI adoption is driven top-down, bottom-up, or through a hybrid approach, success depends on clarity. What process is being improved? How will success be measured? And do users have the expertise to evaluate the system’s recommendations?
Without those answers, even advanced tools become expensive experiments.
The Throughline: Fundamentals Still Win
Despite all the technological change, the closing takeaway was surprisingly familiar. The fundamentals that mattered twenty years ago still matter today: clean data, clear processes, strong connectivity, and measurable outcomes.
AI doesn’t change those requirements. It makes ignoring them more expensive.
As Rachelle summed it up, “What are the challenges we need to solve? What is the value you would garner from this investment? And is it worth it? The label doesn’t matter.”
JBF Consulting helps shippers unlock cost savings, improve visibility, and build scalable logistics technology strategies. Contact us today to learn how our proven approach can deliver measurable benefits for your organization.
About the Author
Rachelle Yeingst is Director of Strategy at JBF Consulting with more than 15 years of experience spanning logistics operations and technology. She partners with shippers to assess, design, and implement solutions that align operational needs with long-term business direction.
Rachelle’s background includes roles in product leadership, consulting, implementation, and post-deployment client success at e2open, BluJay Solutions, and LeanLogistics. She began her career in the United States Marine Corps, where she gained foundational experience in transportation coordination and logistics operations. Rachelle brings a practical, real-world approach to helping clients realize meaningful value from their operational investments.
FAQs
How is AI being used in supply chain and logistics today?
Artificial intelligence is already embedded in daily supply chain workflows, supporting tasks like anomaly detection, disruption prediction, appointment scheduling, and data analysis. Rather than replacing professionals, AI shifts their role from manual execution to decision ownership, helping teams interpret insights and decide what actions make sense in context.
Does AI reduce the need for supply chain expertise?
No. AI increases the need for human expertise. The more professionals understand their data, processes, and constraints, the more value they can extract from AI tools. AI can suggest actions, but humans are responsible for judging whether those recommendations are appropriate and effective.
Why does data quality matter so much for AI?
AI systems depend entirely on the data they are given. Poor data quality leads to inaccurate insights and bad outcomes, regardless of how advanced the technology is. Clean, well-structured, and governed data remains the foundation for successful AI adoption in supply chain and logistics.
What is prompt engineering?
Prompt engineering is the skill of providing AI systems with clear context, goals, and constraints to get more useful outputs. Treating AI like a collaborative, conversational tool, rather than a search engine, allows supply chain professionals to generate more accurate insights, refine recommendations, and retain decision-making control.
Why do most AI projects fail?
Most AI projects fail because companies start with the technology label instead of a clearly defined problem. Successful AI initiatives focus first on the business challenge, desired outcomes, available data, and user expertise. Without that clarity, even advanced AI tools become costly experiments with little measurable value.