Saturday, February 28, 2015

Tim Vogus Responds

[At the end of last year, I wrote two posts (here and here) about a recent paper by Vogus, Rothman, Sutcliffe and Weick (2014). It's my policy always to inform people whose work I criticize directly on my blog and to offer them a chance to respond. Today I received the following from Tim Vogus, which I am happy to post here in its entirety. I'm grateful to Tim for taking the time to engage with my critique and I will be returning the favour in my posts next week. For now, let me just say that, as is the nature of blogging, my remarks were perhaps a bit too blunt when they weren't a little too pointed, and I'm hoping I'll be able to temper the impression in my upcoming posts that I find "no value in the work". Here, in any case, is Tim's response.]


Hi Thomas,

Thank you for giving our work a close read and taking the time to write two interesting posts about it. Even when someone finds no value in the work, it’s appreciated when they take it seriously. I am taking the invitation to respond to your posts both to clarify what we were trying to accomplish in our paper and to illustrate how the right/wrong frame that you apply to thinking about emotional ambivalence is unnecessarily limited and restricts our ability to integrate work on mindfulness, from the setting that matters to you (scholarly writing) to the one that we theorize (high-reliability organizations).

In your second post on our paper you argue that you, as a writing coach, “help people establish and maintain a process that reliably produces publishable prose.” That is a critically important distinction that merits unpacking. Specifically, your work is an attempt to get people to become reliable (i.e., move from paralysis to writing) whereas our focus is on sustaining reliability. In the former, it makes complete sense that you would want to reduce the heightened emotions that inhibit doing the work. In fact, Kathie Sutcliffe and I have repeatedly made similar claims about the importance of routines for enabling mindfulness (as have Dan Levinthal and Claus Rerup in their excellent 2006 paper), in an educational context in the Academy of Management Learning & Education in 2012 and in a health care context in Medical Care in 2007. In the latter paper, we empirically demonstrate that high levels of mindful organizing paired with well-developed routines improve organizational reliability (in the form of fewer medication misadministrations). So our work definitely reflects an appreciation for routine as a foundation for mindfulness and becoming highly reliable. A similar focus is also evident throughout the classic studies of military high-reliability organizations.

So I would argue that your insights are correct under certain conditions – when individuals or collectives are on the path to becoming highly reliable, they need an infrastructure of routines to free up attention to listen (or watch) for signs of deviations. Without those routines, detecting deviations is not possible because everything is noise with no clear expectations. But once you are highly reliable you face a different challenge: how do you hold on to the requisite high levels of energy to sustain being mindful? We propose two mechanisms for when that is the case.

As for your critique regarding job design (i.e., complex and contradictory) – you may be right! The section you excerpt is from our discussion of directions for future research. It is fundamentally an empirical question. We were posing the idea of complex and contradictory jobs as one possible mechanism for eliciting emotional ambivalence and, in turn, sustaining (not creating) mindful organizing. But I think you are wrong to dismiss it out of hand, because there are actual highly reliable organizations that design work in precisely this way. For example, in wildland firefighting there is the so-called LCES structure (e.g., Weick, 1996), which balances faith in capabilities to detect weak signals of changing conditions and respond swiftly to them via lookouts and communication links. That embeds hope in the system. At the same time, escape routes and safety zones are also in place. These simultaneously instill doubt in the system in the form of a recognition that things can fall apart rapidly and unexpectedly. There is also evidence that work is complex and contradictory in the form of the sets of elaborate cross-checks and committees at the Diablo Canyon nuclear reactor (Schulman, 1993). This arrangement is argued to be relevant to sustaining highly reliable performance because it works to curb hubris and bullheadedness (i.e., introduce doubt) as well as to create a provisional alignment among those maintaining reliability. So wildland firefighting and nuclear power production represent systems that implement work that is complex and contradictory as a means of generating ambivalence and sustaining mindfulness and reliability. And it is a complexity and contradiction intelligently deployed to balance a system, not arbitrary noise thrown into a system carelessly, as in your university example.

Moreover, I don’t think any HRO scholar would agree with your assessment of universities as well-designed high-reliability organizations. And it further has nothing to do with arguments we actually make in our piece. We are theorizing how organizations that are already highly reliable might sustain their performance over long stretches of time. We know that prolonged periods of success, especially with respect to “dynamic non-events” (Weick, 1987) like consistently safe performance, create simplifications, drift, and potentially collapse (e.g., Miller, 1993). Emotional ambivalence is one plausible mechanism that could keep the attention, tension, and vigilance alive that would allow for the detection of weak signals of impending danger such that they can be arrested before they amplify and generate harm. We make no arguments whatsoever about how organizations become highly reliable. As a result, for the university example to be relevant you would have to establish that the university is highly reliable in the first place and then explore whether ambivalence might be helpful or not.

Thus, our use of emotional ambivalence is not as a virtue in and of itself. That may or may not be the case. Our argument is simply that in systems that are performing in a highly reliable manner the tension introduced by emotional ambivalence can be constructive because it heightens attention and makes one open to alternate perspectives.

In this response I’ve tried to make clear that our paper attempts to solve a specific theoretical problem in the literature on high-reliability organizations and is not intended to generalize to all people and all things at all times. But your critique offers a helpful reminder that becoming highly reliable and sustaining high reliability might be qualitatively different in the ways described above. I hope my attempt at clarification and integration helps to advance the conversation and moves us away from “tussles” and binary thinking in favor of carefully contextualized arguments and collaborative synthesis.

Thanks for the opportunity to respond, and please construe this response as reflective only of my interpretations and not necessarily those of my co-authors.

Best,
Tim

P.S. I didn’t mention your ASU example in this response because even after reading it several times I have no idea what it means or how it relates to our piece.

[I respond here.]

References

Levinthal, D. A., & Rerup, C. 2006. Crossing an Apparent Chasm: Bridging Mindful and Less Mindful Perspectives on Organizational Learning. Organization Science, 17(4): 502-513.

Miller, D. 1993. The Architecture of Simplicity. Academy of Management Review, 18: 116-138.

Schulman, P. R. 1993. The Negotiated Order of Organizational Reliability. Administration & Society, 25(3): 353-372.

Vogus, T. J., & Sutcliffe, K. M. 2007. The impact of safety organizing, trusted leadership, and care pathways on reported medication errors in hospital nursing units. Medical Care, 45(10): 997-1002.

Vogus, T. J., & Sutcliffe, K. M. 2012. Organizational mindfulness and mindful organizing: A reconciliation and path forward. Academy of Management Learning & Education, 11(4): 722-735.

Weick, K. E. 1987. Organizational Culture as a Source of High-Reliability. California Management Review, 29(2): 112-127.

Weick, K. E. 1996. Fighting Fires in Educational Administration. Educational Administration Quarterly, 32(4): 565-578.


Friday, February 27, 2015

How to Take a Moment

A good writing process is just a dependable series of writing moments. If you know how to give yourself the time and the space to write a paragraph about something you know, then you've obviously got a valuable skill. The aim is to have at least 40, and up to 240, such moments during four eight-week periods every year. This morning, I want to share my approach to any one of them. The ability to take a moment to write at will is the basic skill that I coach people in.

It always begins the day before. At the end of the day, either as the last thing you do at work, or the last thing you do before going to bed, take a few minutes to decide what and when you are going to write tomorrow. It's a good idea to have this planned out in advance, i.e., to have a regular routine of starting at 9:00 AM for example, and to be working on a text that gives you an outline to fill out over a few days and weeks. But the essential thing is that at the end of each day you make a conscious decision about which paragraphs you will write tomorrow and when you will write them. Assign each of them a particular 27-minute* time slot and a central claim. Book the time in your calendar and write the central claim down as a simple, declarative sentence you know to be true. Then try not to think about it until the appointed time. Not thinking about it will itself take a bit of practice. It's part of the discipline.

When the moment arrives the next day, start exactly on time and resolve to stop exactly 27 minutes later, no matter how it goes. (That's what time is for: to be arbitrary.) Begin by typing your key sentence. Then just, as it were, know what you're talking about until the time runs out. Take a three-minute break and go on with your day. This could be either another paragraph (that you decided on the day before) or whatever else is in store for you.

_________
*This number is of course a bit arbitrary. For some people, in some situations, I also recommend trying 13- and 18-minute moments.
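For readers who like to mechanize the evening planning step, the routine above can be sketched as a small script. Everything here is illustrative rather than any real tool: the function name, the example claims, and the default slot and break lengths (27 and 3 minutes, as in the post) are simply assumptions made for the sketch.

```python
from datetime import datetime, timedelta

def plan_moments(start, claims, slot_minutes=27, break_minutes=3):
    """Assign each central claim a fixed writing slot, back to back
    with short breaks, mirroring the evening planning routine."""
    slots = []
    t = start
    for claim in claims:
        end = t + timedelta(minutes=slot_minutes)
        slots.append((t.strftime("%H:%M"), end.strftime("%H:%M"), claim))
        t = end + timedelta(minutes=break_minutes)
    return slots

# Two paragraphs planned the evening before, starting at 9:00 AM.
schedule = plan_moments(
    datetime(2015, 2, 27, 9, 0),
    ["Mindful organizing sustains reliability.",
     "Routines free attention for weak signals."],
)
for start_s, end_s, claim in schedule:
    print(f"{start_s}-{end_s}  {claim}")
# → 09:00-09:27  Mindful organizing sustains reliability.
# → 09:30-09:57  Routines free attention for weak signals.
```

The point of the sketch is only that each moment is booked in advance with a definite claim attached; the decision about what to write is made the day before, so the slot itself is spent writing, not deciding.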

Wednesday, February 25, 2015

Process, Product, Moment

A process unfolds in time. Its product occupies space. A moment is a coordination of time and space. When Virginia Woolf said that she needed "money and a room of her own" to write she meant that she needed a moment for herself. (A room is a space. Time is money.) Henri Bergson said that "time is that which keeps everything from happening all at once". Space, I add, is that which keeps everything from piling up in the same place. As a "process philosopher", Bergson also believed that everything is always happening, that nothing simply "is", that every being is forever also becoming, that the ostensible "product" is merely a stage in a longer process. Even a mountain does not simply exist. It endures.

In a moment, a "here and now", space and time find their finitude, a volume and a duration, in imagination, which is infinite. A process may be very long or very brief, but it is never unimaginably long or brief; a product may be very big or very small but never unimaginably so. While imagination is infinite, we might say, an image is not; an image suggests a finite space, a finite time, though it is itself nothing and nowhere. In a moment, we form an image of something by associating a product with its process, or a process with its product. We see the drawing, for example, and imagine making it with our hands. Or we make the drawing as we imagine how it will look when finished. In that moment we experience the thing definitely; we imagine how we make it look on the page. To imagine is to appreciate one's finitude.

P.S. As I was writing this post, it at one point occurred to me that I should apologise for how "philosophical" it is. I hope I'll be forgiven. I'm trying to suggest, I suppose, that Kant, Kierkegaard, Bergson, Heidegger and, say, Deleuze were "making progress". On Friday, I'll try to translate these reflections into some practical advice for writers. Today, then, I'm inclined to agree with them that their problems are profoundly metaphysical. Why does it embarrass me to work earnestly on sentences like this? Why is it strange to want to get them right?

Monday, February 23, 2015

The Writing Moment

"Suffering is one very long moment."
Oscar Wilde

I think I have a contribution to make to composition studies. The core of what I call Writing Process Reengineering is the composition of an individual prose paragraph, consisting of no less than six sentences and no more than two-hundred words, in exactly 27 minutes. The idea is to analyse the "very long moment" between the decision to write something (like an article or a dissertation or a book) and its submission (to a journal or a committee or a publisher), which a writer really does often experience as a species of suffering, into discrete, finite writing episodes. (The key "discovery" in my own dissertation work, I should note, was that our beliefs and our desires are articulated—joined together, jointed—in our suffering. But I had not yet discovered the paragraph as its literary situation.) The paragraph locates "the writing moment" within the writing "process". This moment may be painful or pleasurable but, owing to its finitude, is not intolerable, not insufferable. It constitutes the when and the where of our composure.

Since its emergence in the 1970s, writing process theory has had an enormous influence on writing instruction, witnessed in part by the predictable emergence, in the 1990s, of a "post-process" movement. My contribution to composition studies, if I have one, will have to be made within this ongoing conversation, and I still have some work to do in defining its contours. But as far as I can tell it is a familiar "post-structuralist" engagement, in which an established theory is challenged mainly on the grounds that it is a theory in the first place, normally by invoking a number of "postmodern" insights about the nature of language that, interestingly in this case, often originally emerged from an intense awareness of the importance of writing to so-called "modernity". This sort of "critique" is not mere criticism. It does not identify ways in which the theory gets its object wrong, but rather ways in which theorising the object is always already a mistake, even an act of violence.

To a certain extent I agree with the post-structuralists about the writing process. While I'm not very impressed with the way they read, say, Derrida, Foucault, and Barthes, I do think we should be cautious about reducing the process by which a text is made into simple stages like "prewriting", "drafting", "revising", and "editing" that lead to the final, publishable result. I think the written product has suffered terribly under the belief that the means, if you will, justify the ends. Instead, I encourage writers simply to have 40 or 80 moments, over a period of 20 to 40 days, during which they are actively engaged in "knowing something" in writing. The aim is not just to "produce text", after all, but to write. And it is the articulation of the author's long moment of suffering, from decision to submission, not into stages to be passed through, but into brief moments to compose, that makes writing, as such, possible.

This week I'm going to be writing more about this, trying to see if I really do have something substantial to contribute to the conversation. Maybe I've finally found my field.

Saturday, February 21, 2015

Philip Roth's Complaint about Wikipedia Revisited

I believe that Wikipedia is one of the most important institutions of knowledge in the world today. Unfortunately, that statement still has to be qualified with the familiar “for better and for worse”. Indisputably, however, it is already having an enormous effect on the way we construct the boundary between the known and the unknown and, in the future, I believe its potential to provide “access to the sum of all human knowledge,” as Jimmy Wales hopes, will be tested and, I too hope, demonstrated. To that end, it is of the utmost importance that those of us who are interested in the institutions that support (and sometimes obstruct) the knowledge enterprise understand what Wikipedia is and how it works. Fortunately some work is being done in this area.

(I immersed myself in the virtual reality of Wikipedia a few years ago and I will one day write a proper account of what I learned there. This blog post can serve as a kind of introduction to that work. Indeed, it may become the introduction to a journal article about my experiences. But, to be clear, let me emphasise that I did not participate in any way in the events that I’m about to describe here.)

Last year, Dariusz Jemielniak published Common Knowledge, a book that is bound to become obligatory reading for anyone interested in Wikipedia. It is both a first-hand account of Wikipedia and an ethnography of its culture. While I don’t feel qualified to assess its methodology, and am, indeed, biased against ethnography as a scientific method, perhaps especially when studying something like Wikipedia, the book is clearly written by someone who has a deep and rich store of experiences to draw on when writing about the subject. My experiences largely confirm his, so I highly recommend the book for anyone who wants to know what goes on behind the scenes at Wikipedia. When I finally write my own contribution to this literature, my approach will be somewhat different, but I’m certain that my work will ultimately only build on the foundation that Jemielniak has laid.

Perhaps that’s why I find it so important to do what I’m going to do in this post, namely, correct him on a small but telling point of detail. The Devil, I believe, is in these details, and when we get them wrong we distort our sense of the entire project. I should note that I have contacted Jemielniak by email and he agrees that his account of the events in the book misconstrues what happened. It’s good to see that sometimes people can admit their mistakes and stand corrected. Others might want to take notice.

In September of 2012, Philip Roth caused a stir (especially among Wikipedians) by publishing an “Open Letter” in the New Yorker that recounted his irritation over his failed attempts to get Wikipedia to change its article on his 2000 novel The Human Stain. (Already at this point it’s important to keep in mind that Wikipedia articles are constantly changing. You can read the current version of the article here. But the version that he was complaining about is archived here.) “I am Philip Roth,” the letter begins inauspiciously, and goes on to say that the Wikipedia article on his novel “contains a serious misstatement that I would like to ask to have removed.” In fact, he had already tried, he explains, to have it removed by other channels, but had now been forced to go public with his concerns because Wikipedia refused to do as he asked.

Let me pause at this point and note that much of the subsequent discussion turned on what is known in literary circles as “the intentional fallacy”, i.e., the question of how important it is to ask Philip Roth what he thinks when writing about a novel he has written. While I do have a view on that question, the point of my disagreement with Jemielniak really doesn’t depend on how we answer it. (Philip Roth clearly thinks it is important that he is Philip Roth and that the article in question is about a novel that he, namely, Philip Roth, has written. After all, he is Philip Roth. I am not. And I don’t.) Our real disagreement turns on matters of fact that, as far as I can tell, are entirely objective and beyond reasonable dispute. In fact, as I read those facts, it is Roth’s letter and not any version of the Wikipedia article that contains a “serious misstatement”.

“My novel The Human Stain,” Roth claims, “was described in the entry as ‘allegedly inspired by the life of the writer Anatole Broyard.’ (The precise language has since been altered by Wikipedia’s collaborative editing, but this falsity still stands.)” None of this is true. As far as I can tell, the article never contained* the phrase “allegedly inspired” and even the “falsity” that the book was thus inspired was never stated, only mentioned, in the article. In all cases, and certainly at the time of Roth’s complaint, the claim was sourced to a named literary critic, Charles Taylor, and Roth’s rejection of it was duly noted and sourced (to an interview with Bloomberg). That is, Roth’s account of the facts in this case is highly misleading.

And yet when writing about it in his book Jemielniak swallowed it largely whole, producing an account of the “incident” that, I’m sure, Roth would take as a vindication of his outrage. I think this account is also, unfortunately, the default understanding of the event because it fits into a larger narrative about Wikipedia’s, if you will, silliness.*** The story can be found on page 21 of Common Knowledge, which, like I say, is worth engaging with precisely because it is likely to form the foundation of much subsequent research on Wikipedia culture. Here, in full, is Jemielniak’s account:

Over the years, both the Polish and the English Wikipedias have increased their requirements for sources. In some cases, the results are absurd. For example, in September 2012 the American writer Philip Roth issued an open letter to Wikipedia in the New Yorker. He politely explained that he had tried to correct a misunderstanding about the origins of the story in one of his books, The Human Stain, on Wikipedia. One of the English Wikipedia administrators refused to permit the changes, because authors cannot make claims about their own work without confirmation from published secondary sources. Immediately after publication of Roth’s letter the Wikipedia entry in question was amended, as it now met the requirement of a published source, and the entire incident was accurately reflected in the entry, but the incident shows that the sources and verifiability policies are taken extremely seriously on Wikipedia, to absurd results.

The major claim of this paragraph (stated twice) is clearly that the incident is an example of the “absurd results” of Wikipedia’s “extreme” sourcing requirements. But beginning with its characterisation of Roth’s at least grumpy and arguably indignant letter as a “polite explanation”, Jemielniak has misunderstood pretty much everything that happened. What Jemielniak describes as Roth having “tried to correct a misunderstanding” actually began on August 20, 2012, when an anonymous editor who identified himself as Roth’s biographer removed 640 characters from the article. They constituted a full two-sentence paragraph:

Salon.com critic Charles Taylor argues that Roth had to have been at least partly inspired by the case of Anatole Broyard, a literary critic who, like the protagonist of The Human Stain, was a man identified as Creole who spent his entire professional life more-or-less as white.[1] Roth states there is no connection, as he did not know Broyard had any black ancestry until an article published months after he had started writing his novel.[2]

The square brackets mark footnotes, i.e., sources. These were:

[1] Taylor, Charles (April 24, 2000). "Life and life only". Salon.com.
[2] Philip Roth interview at bloomberg.com

That is, Roth’s biographer was not “trying to correct a misunderstanding”; he was trying to expurgate all mention of a theory about Roth's work, even Roth’s own opinion about it. The biographer’s edit was quickly “reverted”, i.e., the paragraph was restored. The biographer then removed it again, and it was once again restored. Apparently there was now some behind-the-scenes correspondence between Roth (or his biographer) and Wikipedia officials, which resulted in the open letter. At this point, on September 7, a lengthy discussion ensued on the article’s talk page (which had been otherwise inactive since April of 2011). In the meantime, i.e., between the biographer’s original intervention and the publication of Roth’s letter in the New Yorker, an editor had beefed up the portion of the article devoted to the, let’s call it, “Broyard hypothesis”, finding additional sources of people making the connection, to establish that it’s a serious position to take on the novel, and certainly not something that can be removed at the mere say-so of the author’s biographer acting at “Roth’s request”. Already in the immediate discussion among Wikipedians, we find an understanding of the issues—or rather non-issues—involved. On September 8, an editor named Sylvain1972 said:

There was nothing wrong with the article whatsoever, nor with the way policy was applied in this case. The section in question was about the reception of the novel, not an endorsement of Kakutani's** theories. It reported in an entirely NPOV manner the take of a critic writing for the most respected newspaper in the country. If her speculations were unfounded, that is an issue for the New York Times, not wikipedia. For that matter, the fact that Roth contested the claim was already noted right there in the section. If Roth objected to wikipedia even acknowledging Kakutani's published review, the solution is not to have the material deleted, it is to cite acceptable sources to further highlight his objection.

Shortly after, the editor who had beefed up the article agreed:

I agree with Sylvain. I was not adding the cited sources to reject Roth's contention that he did not know about Broyard, but to show that critics at the time of his book thought of Broyard and discussed him in relation to the novel. As you said, Roth's argument is with the NY Times** and other critics, not with WP, except to the extent anyone told him that he couldn't comment on his own work.

The important point that Jemielniak gets wrong stems from thinking that Roth's letter was needed to get the facts straight. In actual fact, there was no error in the article, and therefore no "absurd" sourcing requirement that kept one in place until Roth wrote an obsessively detailed account of the "real" source of his inspiration. Though the article does now also cite the open letter, its basic message is the same: some people think Coleman Silk is based on Anatole Broyard, but Philip Roth denies this.

This is one of those long posts that should probably really be turned into something more “serious”. (In such an article I would go on to unpack the issues that the Wikipedians were discussing, and explain some of their jargon. Again, I recommend Jemielniak’s book to anyone who wants to learn.) Or, perhaps, it is an indication precisely of what the blogosphere is capable of in the way of alternatives to traditional scholarly publication? Like I said at the outset, it’s also a good lead-in for a discussion of my experiences with Wikipedia. (Watch for it. Here, or perhaps elsewhere, I haven’t decided.) I hope it can stand as an example of the sort of inquiry into Wikipedia that might, eventually, guide our integration of it into ordinary academic practices, as I have argued we should try to do before.

____________
* This is of course a strong and categorical statement to make. I will stand corrected the moment someone shows me the version of the article where the phrase appears. I’ve used Wikipedia’s revision history search tool, supplemented with some manual sampling, to determine to my own satisfaction that I’m right about this. Do note, however, that even if the article had used the word “allegedly” it would not actually be claiming that Roth was inspired by Broyard, only that an allegation to that effect had been made. I’m not sure that Taylor’s speculation counts as an "allegation", however. And I think the fact that Roth thinks of it in those terms says more about Roth than he is himself perhaps aware.

**These references to Kakutani in the New York Times have to do with the edits that were made after Roth's biographer had intervened. The theory was originally sourced to Charles Taylor in a Salon article.

***[Update: In his review of Jemielniak's book at Forbes, George Anders cited the incident as an example of how "tiresome" Wikipedia can be. In an article called "Who Killed Wikipedia" at the Pacific Standard, Virginia Postrel adopts Jemielniak's gloss that Roth was trying to "correct a description of the origin of his novel" and describes it as a "notorious case" of how Wikipedia's "paradoxical culture" works. At the time of the incident itself, the Guardian got it wrong, siding with Roth, as did ArsTechnica. In both of those latter cases, it seems to me we're just talking about lazy journalism. They simply take Roth at his word about what happened and then tell the story from his perspective.]

Wednesday, February 18, 2015

Advice and Evidence

I've mentioned before that one of the benefits of blogging is that it allows Thomas Presskorn to contribute to your thinking. It happened again the other day, when he suggested that evidence might serve some function in shaping "empirical generalizations" about writing instruction. My immediate reaction, which I'd like to spend a few moments of this post standing by, was NO! My approach does not put forward empirical generalizations at all; if anything, it proffers normative ones. Thinking about it some more, I realize that I am resistant to giving evidence for my views about writing, but very willing to give advice.

That almost sounds like a confession. So let me tell you why I'm not embarrassed to say it. My advice derives from many years of experience as a writing coach, which is to say, many years of giving advice to authors and then helping them to reflect on the consequences of trying to follow it. I understand my own advice; that is, I know exactly what it is I'm asking the author to do. And when we look at the results I know exactly what went wrong if it did. That's not a boast; it just indicates how simple my advice is. The art of coaching is grounded in carefully observing the effects of the instructions you give on the person you are coaching. You take that experience into your next session with that individual, and then into subsequent sessions with other individuals. The author becomes better at writing; the coach becomes a better coach.

Remember that I begin with a writer who wants to become a better writer. Unfortunately, this presumption may not inform all academic writing instruction. The idea is sometimes to make students write better, often despite themselves; we don't often enough speak to the part of the student that wants to learn. I do, even when talking to first-year undergraduates. And that's why I reject any demand for "evidence" that my approach works. When I'm selling my coaching to universities, for example, I simply describe what I'm going to do, and what I'm going to tell the participants to do. That has to be enough. If it doesn't sound like an obviously sane approach to writing, then you shouldn't hire me.

The problem with evidence is that it can be used to justify using a pedagogy that neither the teacher nor student understands. The "evidence" might show that people write more, or get published more, if they are taught a particular model of the "writing process". Or they might give better evaluations. Or they might even get higher grades. But if the teacher doesn't really understand what showing them the model accomplishes (they might have been "taught" the model, but did they really "learn" it?) more harm is being done than good. After all, as Wittgenstein would no doubt point out, the fact that someone writes something after being shown a model of a process or a rule of composition does not mean that they are, in any simple sense, "following" the model or the rule to arrive at the end result. I very rarely get the sense that the actual on-the-page quality of student writing is ever examined closely in looking for "evidence" for one or another writing pedagogy.

If someone invented a pill that makes you write better, I'd demand evidence for its effect, and for its lack of harmful side-effects. That's because a pill works (or not) even if you don't understand what it does. Advice only works if you know what you're supposed to do. So, when you are giving advice, you can trace the result back to the writer's understanding of that advice by asking them what exactly they did when they thought they were following it. When people don't get anything out of my advice, it is, increasingly (my advice is getting better and better), because they didn't do what I told them to. Or they earnestly tried, but had misunderstood me.

Empirical generalizations about "effects" are based on evidence. I provide normative specifications, let us say, framed as advice. Here, as elsewhere, Confucius is worth listening to. The "Great Learning" emerges from "watching with affection the way people grow". That's how advice works. The demand for "evidence" only gets in the way.

Monday, February 16, 2015

Assertion and the Writing Moment

My engagement with the field of composition studies is bringing more and more results. Most recently, I discovered that we're about fifteen years into a movement toward "post-process" writing instruction. I was struck in particular by an essay by Gary Olson in a seminal book called Post-process Theory: Beyond the Writing-process Paradigm, which carries the intriguing subtitle "Abandoning the Rhetoric of Assertion". While I agree with post-process scholars like Olson that there can be no single, unified "theory of writing", I have a feeling that the confidence that I have in my own advice makes me sound otherwise. In truth, I think my advice is probably more pre-process than post-process. I don't have a theory of the writing process so much as an approach to the writing moment. I believe that academic writing can be described, not by modeling any particular process as it unfolds over time, but by suggesting what happens every time we sit down to write a paragraph. And here, unlike Olson, I do not think it is wise to "abandon the rhetoric of assertion."

I've written about this before. Almost a hundred years ago, Bertrand Russell introduced Wittgenstein's Tractatus by declaring that "The essential business of language is to assert and deny facts." Wittgenstein himself would later abandon this view, and rhetoricians have always known that it is mainly a philosopher's conceit.* In real life, language has lots of other business to conduct, and even the insistence that assertion is somehow the "essential" business is arbitrary. That is of course Olson's point; if we want to teach students how to write, i.e., to master written language, we're going to have to teach them much more than merely how to assert and deny facts. But to go from this to outright abandoning the business of assertion is, to my mind, to go too far. Today I want to suggest why, and to suggest an alternative.

While it's an easy and glib thing to say, let me begin by pointing out that Olson's argument for abandoning the rhetoric of assertion takes the form of a series of clear, coherent paragraphs, each of which, by and large, makes an assertion and offers support for it. (This sort of "performative contradiction" is a familiar feature of discourse and is nothing to be ashamed of. Sir Philip Sidney wrote his Defence of Poesie in prose; Dante wrote his argument for demotic Italian in Latin.) That's because he's making an academic argument. The paragraph is the unit of scholarly prose composition because it is ideally suited to the statement of a fact, to assertion.

I am subject to fits of what Wallace Stevens called the "rage for order". So while Wittgenstein and the post-process composition instructors would have us despair of establishing clear boundaries between different kinds of writing, I am going to tell you that in addition to the assertive moment of writing, which is the occasion for academic or scholarly or "scientific" writing, there are clarifying, intensifying and enjoining (or injunctive) moments of writing, occasions for philosophical, poetic and political writing respectively. Perhaps the boundaries are not as clear as I would like, but let's follow Ezra Pound and Rosmarie Waldrop and see form as "a center around which, not a box within which". That is, let's not focus on the boundary conditions but on the core task. These tasks can be taught in the classroom, and this will also give students the strength and skill to handle, let us say, "ghostlier demarcations" (stealing a phrase from the same poem by Stevens).

Let me simply state them in this post, leaving their development for later. Philosophy is the art of writing concepts down. The aim is to achieve clarity by bringing to presence the conceptual apparatus that supports our thinking. To this end, the philosopher deploys a rhetoric of existence.** Poetry is the art of writing emotions down. The aim is an intensity that comes from bringing to presence the emotional apparatus that supports our feeling. To this end, the poet deploys a rhetoric of inspiration.** Political writing aims to empower the subject. At the center of a piece of political writing is the representation of an act that the writer is trying to persuade the reader is just or unjust. To this end, the politician deploys a rhetoric of injunction. Finally, scientific writing aims to capture an object. At its center is the representation of a fact that the writer is trying to assert or deny and, to this end, the scientist deploys a rhetoric of assertion. Mastery of each of these tasks requires practice. It comes from discipline.

To say there are four tasks is not to give them equal weight, though composition teachers and composition programs are of course free to decide for themselves how to distribute the emphasis. I would insist, however, that the essential business of academic writing has at least traditionally been to assert and deny facts. We lose something if the rhetoric of assertion is displaced from its central place in academic writing and in the composition classroom. We lose even more if we fail to identify it as a distinct mode of writing among other, equally distinct modes of writing. It is not the distinction that is blurry; it is particular pieces of writing "on the boundaries" that are blurry. They are out of focus. (This can sometimes be good and proper, to be sure.) Assertion, in any case, is a large part of what students come to school to learn how to do. The teaching of other forms of writing should never involve denigrating the importance of assertion. What is important is to help students train their ability to establish a moment of assertion, a distinct occasion on which to compose a statement of the facts.

________________
*[Update: I should use this opportunity to pass on a story from my ex-wife that I've told before. While we were living in Germany, she was attending the lectures of a prominent professor of rhetoric, Joachim Knape. When the course got to the work of J.L. Austin, he resorted to a wonderful bit of sarcasm. "Ooooh," he said, "you can do things with words!" In order to appreciate the remark you have to know a little about the arch-rivalry between philosophy and rhetoric and that Austin's magnum opus is called How to do Things with Words. It must, indeed, be amusing to rhetoricians that arguably the most important work of philosophy published in 1962, after more than 2000 years of condescending to "sophists", was a book that challenged "the assumption of philosophers that the business of a ' statement' can only be to 'describe' some state of affairs, or to 'state some fact', which it must do either truly or falsely."]
**[Update 2 (04/06/15): I never really liked my original labels for the philosophical and poetic "rhetoric", namely, "the rhetoric of clarification" and "the rhetoric of intensification". I considered something more like "the rhetoric of reason" and "the rhetoric of passion", which may actually be the best way to describe it. But scientists probably wouldn't want to be excluded from the former, and politicians are rarely lacking in appeals to passion. So I've decided to go with a distinction between an existential and an inspirational rhetoric. Not all philosophers will like the idea that their rhetoric is "existential", but I think there's a sense in which it is. Let's remember that Colin McGinn wanted to rename the whole field "ontics", i.e., the inquiry into "being". We might also say philosophy deploys a rhetoric of "extance", but now we're making up new words.]

Wednesday, February 11, 2015

Evidence and Experience in Writing Instruction

I just discovered the European Association for Teaching Academic Writing, and I'm now thinking of joining. To that end, I watched the video that was made about the 7th Biennial Conference in Budapest. It all seems like reasonable and interesting and necessary stuff. But at the 5:10 mark, Christiane Donahue, who heads up the Institute for Writing and Rhetoric at Dartmouth, made a remark that gave me pause. An organisation like the EATAW, of course, does not exist just to promote the teaching or even the practice of writing. As with everything else in academic life, it exists to promote research into the practice of teaching academic writing. Donahue made this point in very forceful, and, upon reflection, somewhat disturbing terms.

More and more, people who are teaching academic writing will participate in the kinds of things, like the EATAW, that allow them to actually develop their thinking in terms of research. There has been, for decades, a lot of practice but not always a lot of research. [...] I think that one of the changes for the future, for all of us, is that you won't be able to teach academic writing if you haven't really thought about the evidence that supports what you do, and how that evidence can shape how you're thinking about your teaching. (5:10-5:47)

At first this seems entirely reasonable, and certainly unsurprising. Universities are supposed to offer research-based teaching, so if you're teaching academic writing, it should be based on research into academic writing, right? The same thing, in fact, has happened with teaching in general, which is now supposed to implement the lessons of educational research, and educators are increasingly asked to, precisely, "think about the evidence that supports what they do".

But at some point this has to stop. It is one thing to ask English teachers to offer research-based instruction in, say, Elizabethan drama when they teach, say, Shakespeare; it is quite another to ask that both their teaching methods and their writing assignments also be "supported" by evidence. To my mind, this looks like another incursion of social science into a domain that is really best managed in a humanistic spirit. I'm not against organisations like EATAW, nor even against research into academic writing. What I'm against is a future in which "you won't be able to teach academic writing if you haven't really thought about the evidence that supports what you do".

My emphasis here is on the word "evidence". One minor tweak to this statement would make it much more palatable to me. You shouldn't teach academic writing if you haven't really thought about what you do. An association and a recurring conference can help you think about what you do by sharing your experiences and hearing about others. It should be sufficient for composition instructors to discuss their classroom practices in journals and at conferences, sharing their approaches and opening themselves up to the criticism and contributions of their peers. There is no need to turn the composition student into an object of research, or, worse, a research subject.

This difference between experience-based and evidence-based teaching has been simmering for a while in the back of my mind. The distinction can be and has been applied to other fields, too, of course, like management and medicine. In all cases, "evidence-based" seems like a great idea at first. Why would we not want our educational, managerial and medical practices to be based on "the evidence that supports what we [teachers, managers and doctors] do"? But on closer inspection it introduces a new source of error. We've all learned to be skeptical of purportedly "scientific" studies that show that one or another practice "works". Just because there is "evidence" for doing something in particular does not mean it really works; in a few years, there may well be "evidence" that it doesn't. More importantly, however, even where the studies get reality right, you have to be sure that you know how to implement their prescriptions.

Intuitively competent writing instructors, who really get their students to write better prose, may not be especially competent researchers, or may be very competent researchers in fields other than composition studies. In Donahue's brave new world, they will be at risk of losing their jobs (or never getting them in the first place) to candidates who are able and willing to adopt the theories and methods of composition studies, which will quickly develop (as they already are) increasingly sophisticated theoretical frameworks and methodological approaches. The nightmare scenario that I can see looming in the future is that the composition classroom will be headed by teachers who, instead of simply being able to write and pass that ability on to students, have a demonstrable capacity for research including the famously abstruse writing that goes with it. Why not just select competent writers from within academic disciplines to teach students to do what they do? That is, why not let people who have a demonstrated ability to write instruct the next generation of writers from experience?

Continues with "Advice and Evidence".

Wednesday, February 04, 2015

Susan Blum on Authenticity and Performance

I want to get back to the issue of patchwriting. Rebecca Moore Howard coined the word back in the 1990s to capture the practice of "copying from a source text and deleting some words, altering grammatical structures, or plugging in one synonym for another", something that many people, including myself, see as a straightforward and rather common form of plagiarism. While Howard does approach it as a clear sign of immaturity in a writer, she also, perhaps by the same token, thinks we should be tolerant of it. Much as we tolerate transitional stages in childhood and youth, I guess. But, if you ask me, childishness and rebellion in young people are things that adults should be challenging (in appropriate ways, at appropriate times) as part of helping them grow up. Understanding why young people do bad things (or do things badly, if you prefer) does not mean we have to condone their behavior (or praise its results).

This is exactly what Susan D. Blum seems to me to do. In My Word! (Cornell, 2009), she has provided an anthropology of the culture in which patchwriting is a common and accepted practice. She interprets Howard's stance, rightly, as "permissive" (MW, p. 27), but then goes on, surprisingly, to outright "encourage" it (p. 26). I suspect I know why. Like her modern, enlightened colleagues in anthropology, she does not pass judgment on the culture she studies; she remains, instead, intensely aware of her own foreignness. "After teaching for twenty years," she tells us, "I had come to suspect that my own training as an academic had made me a member of what is almost an entirely foreign culture in contrast to that in which our students live" (MW, p. 7). But the people she had studied are the youth of her own culture, not merely "Western", not merely "American", but, precisely, "academic", i.e., college culture. That is, instead of engaging with her students as people she is supposed to help into maturity, she is marveling at how strange and exotic they are. She is forgetting that she is supposed to pass her "training as an academic" on to her students.

This has profound philosophical implications. According to Blum, academic culture values "authenticity" but young people do not. They have what Blum calls a "performance" self, which is more interested in "doing the done thing" than being true to itself (MW, p. 63f.). For Blum, this is a shocking discovery and poses profound new problems for college teachers. But hasn't this really always been the problem of education? Isn't getting an education very much a matter of learning how to be yourself in public, in part by deciding who you are? While Blum does think plagiarism is best combated through instruction (not punishment), and I agree with this (who doesn't?), she doesn't hold out any hope for helping students establish a "tight connection between their words and their inner being". She seems to think this is an outdated cultural norm.

Blum believes that technology has brought us into a new era of "performance". There's a sense in which I think she is right, but it's one that we've long been familiar with. After all, while it may be true that plagiarism is a relatively new offence—not something that, say, Dante and Shakespeare were very worried about—the injunction to "know thyself" and "to thine own self be true" predate the problem of copying other people's words by a wide margin. That is, stealing other people's writing merely became a new and distinctive way of faking it with the invention of the printing press. (In a time when all copies of books were hand-made, plagiarizing one probably wasn't as tempting.) But the act of pretending to be something you're not is no doubt as old as language. (It's as easy as lying.) The Internet probably intensifies the problem, but it should not change the academic mission, which is not hundreds of years old but thousands. Academics have always known (and have only recently seemed to forget) that their job is to teach young, inauthentic, performance-oriented people to be themselves, i.e., to make up their minds, to speak them, and to write down what they've got on them. We're supposed to teach them how to be authentic adults, not prolong their adolescence.

I am genuinely worried that the culture of patchwriting is a sign that we're giving up on the ability of young people to be themselves. I think it's rooted in a common affectation about youth among soi-disant adults—one that treats our own progeny as members of an exotic, foreign culture. I think it's time for this theme to hear, again, its counterpoint: "The great learning," said Confucius, "takes root in clarifying the way wherein the intelligence increases through the process of looking straight into one’s own heart and acting on the results; it is rooted in watching with affection the way people grow; ..." Yes, that means we have to watch them engage in all kinds of embarrassing fakery. Our job is to hold a mirror up to their nature, to show them the features of their own virtues and vices. And then affectionately watch them grow.