
Most organizations have a well-worn playbook or access to the right resources for selecting enterprise software. If you’ve selected any sort of ERP, CRM, HCM platform, or similar, it’s a familiar rhythm: define requirements, compare vendors, select the “best fit,” then implement with a structured plan and a clear target future state.
AI tool selection looks similar from a distance, but it's fundamentally different in practice. Traditional software selection assumes that requirements are knowable and largely stable. In AI selection, that assumption breaks down almost immediately. Organizations that try to apply ERP-style selection rigor to AI tools end up creating false certainty and under-testing real-world performance.
A better approach recognizes that AI selection isn’t about feature comparison. Requirements are less concrete, and performance depends on context. There are a few key differences at each stage of the selection process to consider before treating an AI selection like any other traditional tool selection.
Before anything else, it’s crucial to determine the “why” behind any new technology solution.
Traditional Technology Selection: With traditional tools, like an ERP or CRM, there’s an emphasis on defining a clear, stable problem and a desired future state. Requirements are assumed to be identifiable and fixed. There’s not much ambiguity. Stakeholders want certainty before they commit to a platform. It’s common to see detailed workflow requirements, scoring models, design expectations, and more developed early. The goal is alignment and confidence before moving forward with tool evaluation.
AI Technology Selection: AI selection starts in a different place. Instead of trying to lock in requirements too early, the emphasis shifts to understanding the problem space. When picking an AI tool, you can’t start by writing a long, fixed list of requirements (“must have X feature, must support Y workflow…”). Instead, start by clarifying the problem you’re solving, the data and conditions the tool will operate under, and what success looks like. These considerations matter far more for AI than for traditional software.
What Changes in the Selection Approach
Selection shifts from finalizing requirements to framing the problem and success criteria. With AI, ambiguity is accepted early to avoid a premature sense of precision.
In ERP selection, a requirement might be, “The system must support X workflow.” In AI selection, a more realistic goal is, “The model should improve forecast accuracy by 10–15% under variable conditions.” That difference matters. The ERP statement implies a deterministic system: if the software supports the workflow, it meets the requirement. The AI statement implies probabilistic performance: whether the tool works depends on data, context, and real operating conditions.
This is where the software enters the conversation.
Traditional Technology Selection: Traditional tools are evaluated through documentation, demos, vendor claims, and scoring models. Capabilities are assumed to be consistent across environments (if a vendor can demonstrate it, the organization assumes it can replicate it). Selection is largely completed before the build begins, and the selection decision often feels like a point-in-time conclusion.
AI Technology Selection: With AI tools, stated capabilities don’t reliably predict real-world performance. Tools are more likely to be evaluated with pilots and proofs of concept using representative data. AI tools behave differently depending on the data they receive, the operational conditions they run under, and the level of tuning required to maintain performance. Selecting an AI tool isn't usually one-and-done.
What Changes in the Selection Approach
Selection shifts from comparison and ranking to experimentation and evidence. In ERP selection, for example, Vendor A might score higher due to a broader feature set. In AI selection, Model B might be chosen because it performs best on actual client data despite having fewer “features.”
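The shift from “must support X workflow” to “should improve forecast accuracy by 10–15%” can be made concrete. The sketch below is purely illustrative (the model names, data, and thresholds are all hypothetical): a small pilot harness that scores each candidate tool against a measurable success criterion on representative data rather than against a feature checklist.

```python
# Illustrative sketch only: comparing hypothetical candidate forecasting
# models on representative pilot data, scored against a success criterion
# ("improve forecast accuracy by at least 10% over the baseline").

def mean_absolute_error(actuals, forecasts):
    """Average absolute gap between actual values and forecasts."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def evaluate_candidates(actuals, baseline_forecasts, candidates, target_improvement=0.10):
    """Rank candidate models by error reduction versus the incumbent baseline."""
    baseline_error = mean_absolute_error(actuals, baseline_forecasts)
    results = []
    for name, forecasts in candidates.items():
        error = mean_absolute_error(actuals, forecasts)
        improvement = (baseline_error - error) / baseline_error
        results.append({
            "model": name,
            "error": round(error, 2),
            "improvement": round(improvement, 3),
            "meets_criterion": improvement >= target_improvement,
        })
    # Best-performing model on this data first, regardless of feature count.
    return sorted(results, key=lambda r: r["improvement"], reverse=True)

# Hypothetical pilot data: actual demand vs. forecasts from the incumbent
# process and two candidate tools.
actuals = [100, 120, 90, 110, 105]
baseline = [90, 135, 100, 95, 120]
candidates = {
    "model_a": [98, 123, 93, 108, 107],   # fewer "features," closer forecasts
    "model_b": [110, 110, 80, 120, 95],
}

for row in evaluate_candidates(actuals, baseline, candidates):
    print(row)
```

The point of the sketch is that the “winner” is determined by measured improvement on the organization’s own data, which is exactly why a tool with fewer features can still be the right selection.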
This is the path forward once a tool is selected and put into action.
Traditional Technology Selection: Once a platform is selected, the roadmap becomes execution-driven. The focus is sequencing work, controlling scope, managing cost, and hitting implementation milestones. Value realization is assumed to follow implementation, with change occurring in episodic bursts (phase 1 go-live, phase 2 rollout, etc.). It’s all about making execution as seamless as possible.
AI Technology Selection: AI adoption requires a roadmap built around learning and scale rather than execution alone. The path typically involves a pilot, making refinements, expanding use cases, and establishing policies and governance. Progress is not a straight line, and value realization tends to be incremental. The “best” use case often changes over time as real performance is observed.
What Changes in the Selection Approach
Selection shifts from deploy once and optimize to deploy, learn, adapt, and scale. A single go-live with a phased rollout works with traditional solutions, but AI requires a more iterative approach.
Traditional technology selection is like buying equipment. Once it’s installed, it’s expected to behave the same way every day. If something goes wrong, it’s usually a workflow issue, a configuration issue, or a training issue, not the tool itself changing.
AI behaves more like a new employee than a piece of software. It makes judgment calls. It’s influenced by the quality of information you give it. It will perform well in some situations and strangely in others. And just like a human, it needs oversight, feedback, and guardrails, especially early on. You’re not just choosing a platform. You’re choosing where you’ll allow it to influence outcomes and what kind of supervision it requires.
At Trenegy, we help organizations develop an AI strategy and roadmap that aligns with business needs and long-term goals. To chat more about this, email info@trenegy.com.