To paraphrase some of my favorite dialogue from The West Wing, I make it a point never to disagree with Ralph Losey when he's right. In his recent "Predictive Coding 3.0" blog posts, Ralph is right about a great many things:

- Using control sets to validate the completeness of predictive coding results is problematic.
- At the beginning of discovery, one's case knowledge is limited, so any initial control set rests on ill-informed review.
- So-called "stabilization" models end the system's training prematurely, based on a poor standard of quality.

What is inaccurate in these posts, however, is the characterization of Recommind's machine learning approach, as well as the suggestion that what Ralph refers to as "Predictive Coding 3.0" is new. So, with all respect to Ralph, let's set the record straight.

Recommind's Axcelerate platform was architected, from the very beginning, to be flexible and to incorporate reviewer feedback interactively. It leverages advanced, proprietary technology of the kind many now call continuous learning. The "continuous" part means that machine learning is integrated into the review process itself. Axcelerate adapts easily to the varying shapes of those review processes, and has from the outset.

Product architecture aside, the workflows we have recommended for many years are both flexible and data-driven. From the first moment our CTO and I sat down with early eDiscovery thought leaders to learn what problems eDiscovery practitioners were facing, flexibility and the changing shape of cases were key criteria incorporated into our thinking. We agree with Ralph's outline of an interactive model for working with data, but we think about it more simply, as three basic steps.
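To make "continuous learning" concrete, here is a deliberately simplified sketch of the general idea: the system retrains on every judgment the review team makes and keeps serving up the highest-ranked unreviewed documents. Everything in it, the toy documents, the naive word-count scoring, is a hypothetical illustration of the technique in general, not Recommind's actual algorithm.

```python
import math
from collections import Counter

# Hypothetical toy corpus: each document is a bag of words.
# Documents about the "merger" stand in for relevant material.
DOCS = {
    0: ["merger", "agreement", "signed"],
    1: ["merger", "terms", "draft"],
    2: ["agreement", "merger", "confidential"],
    3: ["merger", "timeline"],
    4: ["lunch", "menu", "friday"],
    5: ["fantasy", "football", "picks"],
    6: ["office", "party", "photos"],
    7: ["parking", "pass", "renewal"],
    8: ["timeline", "vacation", "plans"],
    9: ["draft", "newsletter"],
}
TRULY_RELEVANT = {0, 1, 2, 3}  # stands in for the human reviewer's judgment
VOCAB = {w for words in DOCS.values() for w in words}

def score(doc_words, rel_counts, irr_counts, rel_total, irr_total):
    """Naive-Bayes-style log-likelihood ratio with add-one smoothing."""
    s = 0.0
    for w in doc_words:
        s += math.log((rel_counts[w] + 1) / (rel_total + len(VOCAB)))
        s -= math.log((irr_counts[w] + 1) / (irr_total + len(VOCAB)))
    return s

def continuous_review(rounds):
    labels = {0: True, 4: False}  # seed judgments from early case knowledge
    order = []                    # the sequence of documents served for review
    for _ in range(rounds):
        # Retrain on every judgment made so far -- the "continuous" part.
        rel_counts, irr_counts = Counter(), Counter()
        for doc_id, is_rel in labels.items():
            (rel_counts if is_rel else irr_counts).update(DOCS[doc_id])
        rel_total = sum(rel_counts.values())
        irr_total = sum(irr_counts.values())
        # Serve the highest-scoring unreviewed document to the reviewer next.
        unreviewed = [d for d in DOCS if d not in labels]
        best = max(unreviewed, key=lambda d: score(
            DOCS[d], rel_counts, irr_counts, rel_total, irr_total))
        # The reviewer's verdict feeds straight back into the next round.
        labels[best] = best in TRULY_RELEVANT
        order.append(best)
    return labels, order

labels, order = continuous_review(rounds=4)
```

In this toy run, the model surfaces the remaining relevant documents first, so the team spends its early review hours on material that matters and never needs a separate control-set phase: training and review are the same activity.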
This approach, paired with our continuous learning technology, yields numerous benefits.
Given that our technology can be leveraged in a variety of workflows, why have we been recommending this approach for some time? And why have we embedded these concepts directly into Axcelerate 5's interactive review dashboards? Because using machine learning as part of a flexible, prioritized review strategy adds value to virtually every review project. And such an approach avoids the rigid protocols that can lead to protracted motion practice and disputes over validation methodologies.

So that's the straight record on Recommind and Predictive Coding. In many ways, however, the discussion around Predictive Coding version numbers is too narrow in scope. Machine learning is, after all, just one part of an efficient review strategy. And review efficiency is ultimately about spending more time with relevant content and less with the irrelevant. That is how you quickly find the documents that make a difference in your case.

In the coming weeks, you'll hear more from Recommind about new ways we're enabling our clients to visualize review efficiency, because knowing what efficiency levels you're achieving, and which strategies are yielding the best results, is crucial to repeating your success. Come try us in 2016, and let us help you understand how Axcelerate can help you succeed.

As for "Predictive Coding 3.0"? At the end of the day, it's not the version number that matters. It's finding the documents that matter.