My, What a Difference a Year Makes

Disclaimer (a bit tongue in cheek, but I do think this is necessary, as there are some critiques below): The views and opinions expressed herein are solely those of the author and should not be attributed to Counsel On Call, Inc., or any of its officers, attorneys or employees.


I’ve had the good fortune to attend the past two annual conferences for the Association of Certified E-Discovery Specialists (ACEDS) in Hollywood, Fla., held at the superb Westin Diplomat.  In comparing the two conferences, all I can say is, “What a difference a year makes.”

First, and purely incidentally, the weather in 2012 was sunny, warm and generally quite pleasant. This year, the weather was overcast, rainy and a bit cooler. 

Just as the weather cooled a bit, I think I detected a slight “cooling” of the conference attendees’ collective enchantment with so-called predictive coding technology. In 2012, predictive coding was going to cause an industry and professional upheaval, eliminating the need for discovery (contract) attorneys, cutting costs, improving accuracy and possibly shifting influence among the various stakeholders in this area. One year later, we have experience – more reported court decisions directly on point and more vendor entries into the marketplace. With this collective experience, the conference attendees had a cooler, more nuanced view of the technology.

Please don’t misunderstand: there’s no question that predictive coding (also known, somewhat synonymously, as technology-assisted review or simply “TAR”) is here to stay and should only improve with time. Rather, the bloom is off the rose, and practitioners are discovering that, although TAR can improve efficiency and the overall quality of a review and can significantly lower overall costs for certain types of matters, it is neither a cure-all nor the disruptive technology that some claimed last year.

I think there are several reasons for this maturing of the collective view:

  • The term predictive coding (trademark issues aside) seems to mean different things to different people, hence the use of the more generic TAR designation. This causes confusion among potential customers.
  • Some technology vendors may have rebranded older technology as TAR, perhaps degrading the user experience in the process.
  • Different TAR tools have different “blind spots” that limit their utility, e.g., image files and spreadsheets may not be considered by the analytics.
  • Far from removing human judgment from the process, TAR applications may increase dependence on human judgment. For example, mistakes by “subject matter experts” can be amplified. Alternatively, try your hand at picking “exemplar” documents to teach the computer – a document might be technically non-responsive to the litigation yet still make an excellent teaching example.
  • TAR itself is not inexpensive.
  • TAR reduces data but does not eliminate the need for some linear review, whether for quality control or during the construction of a privilege log.
  • Not all matters are suitable for TAR, either due to the size of the case or the type of data.

Counsel On Call has always been a solid proponent of predictive coding as well as an early adopter. Some vendors, however, now call their technology predictive coding without delivering the underlying functionality. There is no question that predictive coding is here to stay; its benefits, however, still fall short of the hype.

I’m looking forward to attending next year’s ACEDS Conference.  While not perfect (panels are too large and too much time is spent on speaker introductions), it’s the only conference of which I’m aware that is focused on the e-discovery practitioner.  ACEDS also seeks to professionalize this field, and this is a good thing.

Maybe next year the weather will be warmer.  It will be interesting to take the attendees’ temperature on TAR as well.

 

An Aerial View of the Association of Certified E-Discovery Specialists

Follow Barry on Twitter (@barrywillms)


The third annual Association of Certified E-Discovery Specialists (ACEDS) Conference was held again this year at the Westin Diplomat in Hollywood, Fla. We had great lodgings for sure, but they did not order the warm weather in so-called sunny Florida. Next year’s conference will be moved to May in order to compensate for this unruly weather. I guess I can’t complain too much; there are colder places in February/March, especially this year.

The gathering seemed to be a bit smaller than last year, but it was a really good group of professionals. There were several good sessions in addition to lots of opportunities to mingle and meet everyone. The information presented covered a number of areas, but much of it fell under technology-assisted review (TAR), social media and various ‘best practices’ within the industry.

It seems everyone is starting to dabble in TAR by various names (computer-assisted review, technology-assisted review, predictive coding, etc.). Much of the discussion went beyond simply becoming comfortable with the subject matter to include how to properly validate the process, workflow and output to make sure you achieve your goals and benchmarks.

The use of social media in litigation has not become as big as it was originally projected to be in 2013. However, its presence in cases continues to grow. Tweets, Facebook pages and many other networks are more routinely being collected and produced in litigation than ever before. We can only imagine that this will increase over time. We were told, for example, that in some Asian countries instant messaging, rather than email, is the norm for business communications. It’s certainly something we’ve been anticipating for a couple of years here in the States, and something our technology partners are well prepared for.

The ‘best practices within the industry’ sessions covered the following: dealing with the data privacy issues of the EU, preventing malpractice and keeping ethical issues from overtaking you, and following a process to meet your budget, review and production objectives. There it is again: Success always comes from having a process and following it. As we always tell clients, it’s the project manager’s responsibility to ensure there is a process that is documented, defensible and ultimately repeatable in a future matter. Here are a few quick hits on the good, the ‘OK’ and the bad of the conference:

The good: It was a gathering of practitioners of e-discovery, folks who actually do this day in and day out. Lawyers, consultants, paralegals, IT professionals and technology vendors provided a good mix. It was refreshing to hear war stories from those who deal in process and who want to perfect the best practices of a growing industry. While the conference overdid the ‘experts’ language a bit, it really was a good group of professionals who work exclusively in this industry and who had a lot to share on how best to accomplish goals. In the end, process always wins out. It’s best for clients, budgets, meeting deadlines and your own sanity.

The ‘OK’: While the topics were timely, the presentations this year seemed a bit elementary. There were too many presenters on each panel and not enough variety of speakers from one panel to the next (seemingly lots of folks did multiple panels). Variety is good for the soul. I would encourage the ACEDS team to expand the speaker selection and let each panel have a bit more time to develop its topics and provide more time for Q&A.

The bad: My constant pet peeve: too much time on introductions. For example, the first session didn’t start on time, and the panelists weren’t able to begin talking until we were more than 35 minutes into the program. This limited the Q&A time, which is often a very helpful part of the conversation. Then again, I’ve been to conferences where this would have been a good thing!

The moral of the story is that it wasn’t perfect. But what conference ever is? I appreciate ACEDS’ attempt at bringing together the best of breed within e-discovery, people who are well versed in this field. My philosophy is that the more we focus on best practices, the more clients will rely on us to help achieve their goals. All in all, it was a good event filled with useful information and solid connections with other e-discovery specialists.

 

What We Learned: Legal Tech NY 2013

Last week, I made my annual pilgrimage to the Big Apple to wade among thousands of legal industry professionals, the majority of whom are involved in some phase of the discovery process. And as is normally the case, the three-day event became a blur.

There are just too many people to see, too many technology platforms to demo, too many sessions to attend. However, Legal Tech does provide that one time of year to focus on the wide array of technology and ancillary service offerings that are integral to our profession. Moreover, it provides a great opportunity to keep abreast of national and global trends in technology and its application to the legal practice.

This year was particularly fruitful, so I’ve come back with 10 observations that are related to the issue du jour: Technology-Assisted Review (TAR). I will delve into more detail in multiple blog posts over the next few weeks, but for now, here are my thoughts and summary on a panel that took on some of the bigger issues.

Part 1

TAR really took center stage at this year’s Legal Tech.  Unlike last year, when the discussions of predictive coding technologies (now generically referred to as TAR, correctly or incorrectly) largely focused on the uncertainties of computer review and “black box” technology, 2012 was clearly a year in which TAR in all its different varieties was embraced by legal practitioners. This made the panels much less theoretical and much more practical.

Indeed, the panel entitled “Case Studies and Lessons Learned from the Practical Use of Technology-Assisted Review” offered a guided tour of how each of the four panelists uses TAR in his or her area of specialty.

10 observations from the panel discussions

  1. There is no one-size-fits-all technology or methodology. Sorry, there is no ‘easy’ button. While all the panelists had regularly used TAR, none of them used the same technology or even the same approach to using the technologies. However, the panelists were consistent in that they each had clearly defined processes that they followed for each matter. We wholeheartedly agree: there are many tools with different features and benefits, and the key to successfully and defensibly utilizing the technology is to have customized processes.

  2. You can’t separate the law from technology. As technology continues to advance at warp speed, there’s still no substitute for “good lawyering.” To effectively use and defend the use of TAR, the attorney should follow the same principles that are part of any successful legal strategy. The first is to talk to your client. It is imperative that the attorney spend time with the client and ask questions designed to lead to the identification of relevant information and examples of documents that can be used in creating seed sets. Ask about acronyms, where they store data and who they communicated with on the other side to develop your collection and review strategy.

  3. There is no case that is too small. The potential benefits of using TAR for large data populations are generally well accepted. For smaller data populations, however, the panelists agreed that TAR is still helpful in cases with as few as 15,000 documents. In fact, the obstacle to using the technology on cases under 15,000 documents is not that the technology can’t assist in the review; rather, if doing so requires seeking out a new technology vendor with TAR capability, it may not be worth the additional time and effort. Conversely, if you have a relationship with a technology vendor and have processes built around the applicable TAR technology, the use of TAR is especially helpful in resolving smaller disputes, as the costs can be reduced dramatically and the risks of using the technology are much lower.

  4. You do not have to be a technology expert to use TAR. The panelists were asked how many times they had had to explain the mathematics behind the algorithms used to train a predictive coding tool. All but one – who happened to have developed her own proprietary TAR module and had explained it for other purposes – had never been asked to explain the underlying technology used in the review.

  5. Process, process, process. Create a process. Document your process. You should be able to clearly present the steps taken to identify responsive documents, and that process should establish good faith and reasonableness. The processes described to “train” the computer in TAR methods differed from panelist to panelist, but each described the methodical process they followed to (1) create the “seed set” and (2) validate the results.

  6. Utilize the entire tool box to create a “seed set” to train the computer. This gets into the weeds a little, and I plan to post separately on this vital aspect of predictive coding, but the crux of the matter is that key terms and concept clustering are still used in many TAR platforms.

    Key terms as a method to create seed sets. One panelist “uses key terms for inclusion but never for exclusion.” So while she will populate a seed set with key-term hits, she will not use key terms to exclude documents; documents without hits can still be brought into the seed set through other methodologies (e.g., random sampling, concept clusters).

    Clustering as a method to create seed sets. One advantage of the clustering approach is that you are not limiting the scope of the universe with key terms, which are typically inadequate if employed as the only methodology for identifying responsive documents.

    Note: In our experience, not all concept/content clustering technology is alike. In fact, some of it is virtually useless based on the methodology used to create the clusters. Many programs “cluster” documents that seemingly have no substantive relation to one another, and certainly not with enough reliability to create “seed sets” for TAR. On the other hand, concept clusters that narrowly define the size of the cluster, so that each contains only highly similar documents, can be very helpful in creating useful seed sets and in eliminating documents that have no value to the case or the training. With the right concept clustering technology, sampling the documents in a “cluster” is similar to the old practice of going through a warehouse of banker’s boxes full of documents: looking at the outside of a box for a label (or any available indices of the boxes), opening it up and sampling the documents. Very quickly, by viewing the folder names and glancing through the documents, the reviewer could make a reasonable determination as to the contents of that box and decide whether it should be “in” or “out.” There would be no need to look at every document to make this determination.
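
To make the clustering idea a bit more concrete, here is a minimal sketch, using a generic open-source toolkit (scikit-learn) rather than any vendor's TAR engine, of grouping documents by textual similarity and pulling a few from each cluster as seed-set candidates for attorney review. The cluster count and per-cluster sample size are illustrative choices only.

```python
# A minimal, hypothetical sketch of concept clustering for seed-set selection.
# The library (scikit-learn), cluster count and per-cluster sample size are
# illustrative assumptions, not any vendor's actual TAR implementation.
import random

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


def suggest_seed_candidates(documents, n_clusters=50, per_cluster=3, seed=42):
    """Group documents by textual similarity and sample a few from each cluster."""
    # Represent each document as a TF-IDF vector of its terms.
    vectors = TfidfVectorizer(stop_words="english", max_features=20000).fit_transform(documents)

    # Cluster the vectors; tight clusters tend to hold highly similar documents.
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(vectors)

    # Sample a handful of documents from each cluster for attorney review,
    # much like pulling a few folders from each banker's box.
    rng = random.Random(seed)
    candidates = []
    for cluster_id in range(n_clusters):
        members = [i for i, label in enumerate(labels) if label == cluster_id]
        if members:
            candidates.extend(rng.sample(members, min(per_cluster, len(members))))
    return candidates  # indices of documents an attorney would review and code
```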

  7. There is no magic number with respect to how many documents should be reviewed to “train” the computer. The key is not the number but the richness/representative nature of the seed set. The goal of creating any seed set is to find as many representative documents in the population as possible so the computer can apply its analytics. This is often not all done at the outset; rather, it’s an iterative process in which you continue to “train” the computer as you find more and more representative documents (e.g., “active learning”).
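
For illustration only, here is a rough sketch of what that iterative loop can look like, with a simple logistic-regression classifier standing in for whatever analytics a given TAR tool actually uses; the round count, batch size and uncertainty-based selection are my own assumptions, not any panelist's workflow.

```python
# A simplified, hypothetical active-learning loop; the logistic-regression
# classifier and the uncertainty-based selection stand in for whatever
# analytics a given TAR tool actually uses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def train_iteratively(documents, attorney_codes, seed_idx, rounds=5, batch=50):
    """attorney_codes(i) returns 1 (responsive) or 0 (not), supplied by a human reviewer.

    The seed set (seed_idx) should contain both responsive and non-responsive
    examples so the model has something to learn from.
    """
    X = TfidfVectorizer(stop_words="english").fit_transform(documents)
    labeled = {i: attorney_codes(i) for i in seed_idx}  # start from the seed set

    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        idx = list(labeled)
        model.fit(X[idx], [labeled[i] for i in idx])  # retrain on everything coded so far

        # Score the still-uncoded documents and pick the ones the model is least sure about.
        unlabeled = [i for i in range(len(documents)) if i not in labeled]
        probs = model.predict_proba(X[unlabeled])[:, 1]
        by_uncertainty = sorted(zip(unlabeled, probs), key=lambda pair: abs(pair[1] - 0.5))

        for i, _ in by_uncertainty[:batch]:
            labeled[i] = attorney_codes(i)  # the attorney codes the new batch

    return model  # use model.predict_proba to rank the remaining population
```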

  8. Human reviewers are critical to the TAR process. This is the case for two main reasons:

    • Training a predictive coding tool requires attorneys with significant experience (preferably in litigation) and knowledge of the client, case and substance, as the decisions made to train the tool have a much larger impact than those of an individual reviewer on an individual document.

    • Reviewing the documents predicted as “responsive.” The only unanimous point of agreement on the panel was that once the predictive coding technology has identified the likely responsive documents, a 100% document-by-document review of the documents to be produced is recommended. There are two primary reasons to review the predicted relevant documents: (1) privilege and (2) knowledge of your production. The panelists agreed that, to date, TAR technologies have not been as successful at identifying responsive PRIVILEGED documents; therefore, that remains an important function for a human reviewer to carry out. All agreed that when you are producing documents, the attorney should be aware of the documents being turned over. The first time they see a document should not be during a deposition of their client.

      That being said, the panel noted a few situations that might warrant less than a 100% review of the predicted responsive set, relying instead on sampling of the proposed results: second-request situations and third-party subpoenas.

  9. Effective utilization of TAR saves significant time and money, and is defensible. One of the panelists explained that he had a case in which the review had originally been performed in linear fashion, using 20 to 30 attorneys over a six-month period. By circumstance, several years later the court ordered a re-review of the data for different objectives. Using TAR, it took one attorney a week and a half to complete what had been the work of five associates. Depending on the tool selected and the methodology deployed, TAR has tremendous potential to cull through a lot of non-relevant material and to eliminate much of the attorney review time otherwise spent sorting through the mountains of non-responsive documents typically found in any given case (usually 10% or less of the documents in a review are responsive). By utilizing TAR, it is possible to increase the responsive rate of a review to 50% or above, which permits the attorney reviewers to perform more in-depth and substantive analysis without wasting time and money reviewing spam or other clearly non-relevant material.
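
To put rough numbers on that culling effect, here is a quick back-of-the-envelope calculation using the 10% and 50% figures above; the 100,000-document collection size is purely hypothetical.

```python
# A purely hypothetical back-of-the-envelope calculation of the culling effect.
collection = 100_000                     # assumed collection size, for illustration only
responsive = collection * 0.10           # roughly 10% of documents responsive, per above

linear_review_volume = collection        # linear review: eyes on all 100,000 documents
tar_review_volume = responsive / 0.50    # TAR-culled set running ~50% responsive: 20,000 documents

print(linear_review_volume, tar_review_volume)   # 100000 vs. 20000.0
```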

  10. Validate your results. Do your own validation/null-set sampling. Be prepared to show that a reasonable process was undertaken to identify the documents not reviewed on a document-by-document basis. This is no different from any other data reduction methodology (e.g., key term development, sampling, testing and refinement), but it is always a crucial step in tying up the loose ends of your process.
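
One simplified way to run that null-set check is to draw a random sample from the documents the process set aside, have an attorney review the sample, and estimate the elusion rate with a margin of error. The sample size and confidence level in this sketch are illustrative assumptions, not a recommended standard.

```python
# A simplified, hypothetical null-set (elusion) sample; the sample size and
# confidence level are illustrative choices, not a recommended standard.
import math
import random


def elusion_check(null_set_ids, review_fn, sample_size=400, z=1.96, seed=7):
    """Estimate how many responsive documents were left behind in the null set.

    review_fn(doc_id) returns True if an attorney codes the sampled document responsive.
    """
    sample = random.Random(seed).sample(list(null_set_ids), min(sample_size, len(null_set_ids)))
    hits = sum(1 for doc_id in sample if review_fn(doc_id))

    # Point estimate of the elusion rate plus a normal-approximation margin of error.
    rate = hits / len(sample)
    margin = z * math.sqrt(rate * (1 - rate) / len(sample))
    return rate, margin  # record these figures as part of the validation documentation
```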

I’ll have follow-up blogs in my LTNY series posted here in the upcoming weeks.

Legal Tech NY 2013 Panel
"Case Studies and Lessons Learned from the Practical Use of Technology-Assisted Review"

Panelists
Thomas Lidbury, partner, Drinker Biddle & Reath
Alan Winchester, partner, Harris Beach
Maura Grossman, counsel, Wachtell, Lipton, Rosen & Katz
Jennifer Keadle Mason, managing partner, Mintzer, Sarowitz, Zeris, Ledva & Meyers