When Can You Trust a Data Scientist?

by John Sides on July 21, 2013

This is a guest post from Andrew Therriault, a political science Ph.D. who is Director of Research and Business Development for Pivotal Targeting/Lightbox Analytics.

*****

Pete Warden’s post “Why You Should Never Trust a Data Scientist,” which Henry Farrell linked to, illustrates one of the biggest challenges facing both consumers and practitioners of data science: the issue of accountability. And while I suspect that Warden, a confessed data scientist himself, was being hyperbolic when choosing the title for his post, I worry that some readers may well take it at face value. So for those who are worried that they really can’t trust a data scientist, I’d like to offer a few reassurances and suggestions.

Data science (sometimes referred to as “data mining,” “big data,” “machine learning,” or “analytics”) has long been subject to criticism from more traditional researchers. Some of these critiques are justified, others less so, but in reality data science has the same baby/bathwater issues as any other approach to research. Its tools can provide tremendous value, but we also need to accept their limitations. Those limitations are too extensive to get into here, and that’s indicative of the real problem Warden identified: when you’re a data scientist, nobody checks your work, mostly because few of your consumers even understand it.

As a political scientist by training, I found this a strange thing to accept when I left the ivory tower (or its Southern equivalent, anyway) last year to do applied research. Clients hire someone like me because I know how to do things they don’t, but that also means they can’t really tell whether I’ve done my job correctly. It’s ultimately a leap of faith: the work we do often looks, as one client put it, like “magic.” But that magic can offer big rewards when done properly, because it can provide insights that aren’t available any other way.

So for those who could benefit from such insights, here are a few things to look for when deciding whether to trust a data scientist:

  • Transparency: Beware the “black box” approach to analysis that’s all too common. Good practitioners will share their methodology when they can, explain why when they can’t, and never use the words “it’s proprietary” when they really mean “I don’t know.”

  • Accessibility: The best practitioners are those who help their audience understand what they did and what it means, as much as possible given the audience’s technical sophistication. Not only is it a good sign that they understand what they’re doing, it will also help you make the most of what they provide.

  • Rigor: There are always multiple ways to analyze a “big data” problem, so a good practitioner will try different approaches in the course of a project (a simple illustration follows this list). This is especially important when using methods that can be opaque, since it’s harder to spot problems along the way.

  • Humility: Find someone who will tell you what they don’t know, not just what they do.
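
To make the “try different approaches” point concrete, here is a minimal sketch of what cross-checking a more opaque model against a simpler, more interpretable one might look like. It assumes Python with scikit-learn and uses a synthetic dataset as a stand-in for a real project’s data; the specific models and the AUC metric are illustrative choices, not a prescription.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for a real project's dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)

models = {
    "logistic regression (simple, interpretable)": LogisticRegression(max_iter=1000),
    "random forest (flexible, more opaque)": RandomForestClassifier(
        n_estimators=200, random_state=0),
}

# Evaluate each approach the same way, out of sample, and compare.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

If the flexible model and the simple baseline disagree sharply, that is a cue to understand why before trusting either set of predictions.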

These are, of course, fundamental characteristics of good research in any field, and that’s exactly my point. Data science is to data as political science is to politics, in that the approach to research matters as much as the raw material. Identifying meaningful patterns in large datasets is a science, and so my best advice is to find someone who treats it that way.

Comments

Brond August 3, 2013 at 8:40 pm

If data science aspires to be science, then the insights derived from applying the various inferential techniques must lead back to a falsifiable proposition about the world which can be tested in the data for significance. Alternate explanations can be proposed and eliminated. Data science labors under the burden that even its “observations” involve assumptions about samples, statistical independence, and randomness that should really be back-checked and cannot be taken on faith. The best scientific analogy might be the design of particle physics detectors, which also deal with torrents of irrelevant data. But there the problem is simplified by conservation laws of energy, momentum, etc., which can be taken as true for the detector design (and back-checked). In big data the problem is signal to noise. To paraphrase the mathematician John Tukey a little: if you torture big data long enough it will confess to anything. Of course, the difference between an inference that may be commercially actionable and one that would be scientifically publishable may be significant. Before trusting the inferences or predictions of the Data Scientist, ask, with Nassim Taleb, whether he has “skin in the game.”
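
The commenter’s “torture the data” warning is easy to demonstrate: screen enough pure-noise predictors against a pure-noise outcome and roughly five percent of them will clear a conventional p < 0.05 threshold by chance alone. Below is a toy simulation in Python using NumPy and SciPy; the sample sizes and threshold are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pure noise: the outcome has no real relationship to any predictor.
n_obs, n_predictors = 500, 200
y = rng.normal(size=n_obs)
X = rng.normal(size=(n_obs, n_predictors))

# Screen every predictor against the outcome and count "significant" hits.
false_hits = sum(
    stats.pearsonr(X[:, j], y)[1] < 0.05 for j in range(n_predictors)
)

print(f"{false_hits} of {n_predictors} noise predictors look significant at p < 0.05")
# Expect roughly 10 (5% of 200) purely by chance.
```

That is the signal-to-noise problem in miniature: without out-of-sample validation or corrections for multiple comparisons, “insights” like these will surface in any sufficiently large dataset.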

