It has become notoriously difficult to replicate experiments in the AI field. Getting neural networks to perform well can be something of an art, and the tweaks that make a network work often go unreported in publications. Moreover, as networks grow in size and complexity, studying these models becomes expensive, if not impossible, for all but the best-funded labs, given the huge data sets and massive computing power involved in training them. Facebook's AI team, for example, struggled to replicate AlphaGo. For Anna Rogers, a machine-learning researcher at the University of Massachusetts, this raises the question:
"Is that even research anymore? It's not clear if you're demonstrating the superiority of your model or your budget."