From research and prototyping to commercial development, AI/ML requires making decisions and trade-offs. This is where the PM needs to provide loads of context to the R&D team: what are the customer problem(s) we are trying to solve? What parameters and trade-offs would be acceptable to a customer? Have we considered the cost trade-offs of operating in practice?
From a data scientist's point of view, whether a model is sufficient to support the business case may be a higher bar than whether it is a good model. Clear business strategy and customer context upfront can help guide the direction development takes (kill the research and start fresh, re-focus it, etc.)
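One way to make that higher bar concrete is to evaluate a model against customer-driven acceptance thresholds, not just research metrics. The sketch below is illustrative only: the metric names and threshold values are assumptions, not from this article.

```python
# Hypothetical sketch: "good model" vs. "good enough for the business case".
# Metric names and all threshold values are illustrative assumptions.

def meets_business_case(metrics: dict, business_reqs: dict) -> bool:
    """A model can score well on research metrics yet still fail the
    business bar: precision at the operating point, latency, and cost
    per prediction all have to clear customer-driven thresholds."""
    return (
        metrics["precision"] >= business_reqs["min_precision"]
        and metrics["latency_ms"] <= business_reqs["max_latency_ms"]
        and metrics["cost_per_1k_preds"] <= business_reqs["max_cost_per_1k"]
    )

# A research-grade model: strong precision, but too slow and too costly to operate.
research_model = {"precision": 0.94, "latency_ms": 850, "cost_per_1k_preds": 4.20}
business_reqs = {"min_precision": 0.90, "max_latency_ms": 200, "max_cost_per_1k": 1.00}

print(meets_business_case(research_model, business_reqs))  # False: good model, insufficient business case
```

Writing the thresholds down forces exactly the PM conversation described above: which trade-offs a customer would actually accept.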
Test environment vs. Real-world environment
Skimping on QA strategy, test environments and real-world testing is a classic software development pitfall. With AI/ML programs, again this problem is amplified. The biggest challenge to commercial success lies in the substantial delta between the test environment and test data, and the highly variable nature of the field. Pitfalls include:
Careful modelling of the different data sources and their attributes - features, availability, timing, anticipated errors and drift - can lay the foundation for the right QA strategy: what to test, where to test, and how to judge success.
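That modelling can be made explicit rather than left in people's heads. A minimal sketch, assuming a hypothetical spec per data source (the class, field names, and example values are all illustrative), from which concrete QA checks fall out:

```python
# Illustrative sketch: capture each data source's attributes up front,
# then derive the QA checklist from the spec. All names are assumptions.
from dataclasses import dataclass

@dataclass
class DataSourceSpec:
    name: str
    features: list              # which features this source provides
    availability: str           # e.g. "batch-daily" or "streaming"
    expected_latency_s: float   # timing: how stale can the data be?
    anticipated_errors: list    # known failure modes to test for
    drift_tolerance: float      # max acceptable shift in a monitored statistic

def qa_checks(spec: DataSourceSpec) -> list:
    """Turn the spec into concrete test items: what to test,
    and how to judge success in the field, not just in the lab."""
    checks = [f"validate schema for {spec.name}: {spec.features}"]
    checks += [f"inject error '{e}' and verify graceful handling"
               for e in spec.anticipated_errors]
    checks.append(f"alert if monitored statistic drifts more than {spec.drift_tolerance:.0%}")
    return checks

sensor = DataSourceSpec(
    name="field_sensor_feed",
    features=["temperature", "vibration"],
    availability="streaming",
    expected_latency_s=5.0,
    anticipated_errors=["dropped packets", "clock skew"],
    drift_tolerance=0.10,
)
for check in qa_checks(sensor):
    print(check)
```

The point is not this particular schema but the habit: each attribute in the spec maps directly to something testable, which is how the delta between test and field data gets surfaced early.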
Give the team the big picture
In AI/ML programs, once production starts, research usually continues in parallel. Managing both as one cohesive team is a new challenge for traditional program managers.
From a process perspective, the requirements are sometimes communicated from research to production by simply handing over the researcher's Jupyter notebook, or a set of Python or R scripts. If the production team redevelops and optimizes the code for production while the research team continues from their base notebook, you have the problem of versioning the code and identifying changes.
From a human perspective, it can be easy to assume that the individual team members already see the big picture, because they are often highly educated and experienced, especially data scientists who may hold a PhD. Nevertheless, these are still just people, and people see the world through their own lens until the manager gives them the big picture.
To ensure a well-oiled research and production machine:
Back in the 1970s and 80s, software development was a sort of "black magic", where both benefits and risks could take you by surprise, forcing management to plan carefully. We have since made leaps and bounds in the maturity and predictability of our software engineering practices, to the point that we take for granted that anything can be engineered given enough time and budget. In a sense, AI takes us back to those early days when anything was possible, but nothing was to be taken for granted. The risks are high, but in a way, that's a good thing - it should increase the maturity of our management processes and our sense of responsibility.