In a talk sponsored by the PhD program in English, Nan Z. Da (Johns Hopkins) addresses a consensus in the humanities wing of the data sciences: that such a science discovers discrimination and bias, identifying patterns scientifically and at scale. On the ground, this science employs statistical tools for textual and metadata analysis, often applied using principles of computational linguistics and sociological demographics. One hope of the digital humanities is that this branch of data science becomes a force for good in the world. Such a science would not only reform the methodologies of the literary humanities; it would also subvert the technocratic data sciences from within. Data feminists critique data science when it "reinforces existing inequalities" and wish to use data science "to challenge and change the distribution of power."

In other words, the computational literary humanities believes, and often rightly so, that it operates at one of the final frontiers of humanistic inquiry. Here are questions of data and justice, absolute equality, and the possibility of a complete record of reality. Literary studies, in turn, understands prejudice as a scientific object that has a strange relationship to empiricism. It is both hidden and flagrant, aided and hurt by the quantitative sciences, moving oddly between micro-harm and macro-harm. Bringing it to justice suffers from the poverty of examples: witnesses are so scarce, and similar events so under-recorded, that the victim can only use her own example. It also suffers from the abundance of examples, or demographically significant exemplarity: the fact that the quantifiably recognizable behaviors of others influence our opinions of the group, and that genre recognition increasingly overlaps with race science. More fundamentally, literature understands prejudice's relationship to alibis, a consideration that expands by an order of magnitude the boundary of the empirical.
Professor Da’s talk is free and open to the public.