
Sunday, January 29, 2023

Smith on Sympathy and Selfishness in the Wealth of Nations, Book 1, Chapters 1-3

I'm hardly the first person to note that "The Adam Smith Problem"--that apparent contradiction between the sympathy ascribed to humanity in The Theory of Moral Sentiments and the self-interestedness ascribed to humanity in The Wealth of Nations--is itself problematic, based as it is on a mistaken premise about what Smith actually argues in these texts. But, revisiting Smith this weekend, I'm struck by the fact that these two texts are more attuned to each other than even mainstream Smith scholarship seems to think. (I say this knowing that, despite relatively wide reading at this point, I still haven't scratched the surface of Smith scholarship...so it's both possible and likely that I've missed a source that takes a similar view to the one I'm about to outline!)

This term, I'm teaching a grad seminar on various conceptual (rather than historical or practical) overlaps between narrative art (especially opera) and economic theory in the 18th century. The purpose of the course is not to ask how the economics of performance or artistic production worked back then, but rather to investigate how the various ideas and preconceptions that gave rise to the birth of economics (especially to Smith's writings) also structured the way various artists were thinking about character, narrative, plot, and psychology at the time--that is to say, how the conceptual structure of early economic thinking enabled a certain kind of artistic output to arise. We've spent the first few weeks of term grappling with various musicological texts on market culture and its effect on the works of Haydn, Mozart, and Beethoven especially--but starting next week, we will begin to read Smith. I opted to begin with WN rather than TMS.

My romp through WN this weekend is my second time opening this book. I read it for the first time in 2021, and now I'm rereading portions specifically with an eye to class discussions this term. Reading it the first time, I was overwhelmed by the system of thought it put across, and the encyclopedic completeness with which it communicated this system. It was also the first Smith I had read. Now as I reread it, I have the benefit of also having read TMS, his History of Astronomy, a bunch of his essays (including the excellent writings on music, the imitative arts, etc.), and some of his lectures on rhetoric.

All of this is to say, I'm approaching WN with a vastly different structure of background knowledge from what I had back in 2021. And the experience of reading even the first few chapters is indeed strikingly different from what I recall from two years ago. Here are the things that stand out to me, on the level of argument and rhetoric.

First, and most broadly--and pace all those commentators (such as Russ Roberts, whose How Adam Smith Can Change Your Life is fun, but now seems a bit misleading)--Smith does not begin WN with an appeal to selfishness. There is a famous sentence, quoted by virtually all the commentators I've read, which runs:

"It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interests. We address ourselves, not to their humanity, but to their self-love..."

Yes, it's clear that this is the ur-statement of self-interest it is often taken to be; yet I find it significant, upon re-approaching this text, that it is placed in Chapter 2 rather than Chapter 1. Chapter 1, far from promulgating (much less recommending) a view of the fundamental selfishness of people, is all about the seeming-miracle of coordination among those who work together in individual industries. Smith marvels at the productiveness that arises with the division of labor, and implies, both in this first chapter and in Chapter 3, that the mechanism which allows the general increase in prosperity, productivity, and well-being to occur is mutual, imaginative sympathy as much as it is self-interest. Chapter 1 reads like proto-Hayek on the distribution of knowledge through society (and anticipates, also, the famous "I, Pencil" essay by Leonard Read). Smith portrays the most successful people as those who work together to complement each other's needs, and even in his appeal to the "extent of the market" in Chapter 3, he suggests that an awareness of other people's interests and desires is itself the factor that determines what professions each individual can pursue. In itself, this framing of the first chapter, and the fact that Chapter 2 (with the "self-interest" statement quoted above) is tucked quietly between Chapters 1 and 3, undermines the apparent distinction between the worldview encoded in TMS and that encoded in WN.

There are some other interesting quirks of rhetoric and argument I noted as I made my way through these chapters, as well. For instance, I am struck by the mode of presentation of the division of labor idea. Smith could easily have begun the book with a statement along the lines of what we read in Chapter 2; he could also have begun Chapter 1 with a clear, overarching statement of the idea he will come to by the end of the chapter. Instead, however, he begins immediately with an example. He states that it will be easier for the reader to grasp the overall concept if he begins this way--a bit inductivistic, alas, but unsurprising given the Zeitgeist--but what he actually does, right on the first page, is to make a basic point about invisibility and evidence. He states that the effects of the division of labor are greatest precisely where they cannot be observed directly: in small enterprises, he tells us, it's easy to see people working on individual components of a project, whereas in large societal enterprises the work is so widely distributed that nobody can actually see all of it happen, nor grasp the extent of division it takes to complete it. Reading this passage after having read the History of Astronomy, I'm struck by the fact that this explanation is a concrete manifestation of a point he implies w/r/t the philosophy of science, namely that the task of science is to explain the seen in terms of the unseen. He immediately appeals to, and strengthens, his reader's tolerance for arguments invoking invisible mechanisms. The effect of this is not only to lead up to the famed Invisible Hand (though I think it does that as well!), but to bolster the subsequent arguments and anecdotes concerning coordination and the extent/power of the market, which rely on a sense that an invisible network of sympathies and imaginings connects all of the individual people in society.

Even the statement concerning self-interest, in Chapter 2, is not what it seems when quoted out of context. Smith begins the chapter by distinguishing the behavior of animals from that of humans. Animals have to look cute, he says, and appeal to humans' good nature, if they want a human to pamper them. Humans, he says by contrast, are forced to think rationally about what other people want. The ensuing statement about the self-interestedness of the butcher, baker, and brewer looks on the surface like it defends a view of humanity as intrinsically selfish. But it seems to me that it rather urges people to sympathize more with those around them, for the very mechanism by which we could even imagine the desires of the baker, brewer, or butcher is precisely the sympathetic impulse taken up in TMS. Smith speaks explicitly of self-interestedness, but implicitly gives us an explanatory structure that depends entirely on sympathy, coordination, and an imaginative effort towards fellow-feeling.

Finally, I'll just note a few fun things that occurred to me while reading today. First, Smith anticipates the argument (end of Chapter 1) that it's better to be a poor person in a wealthy nation than a wealthy person in a very poor nation. He puts the point in terms of Britain vs. Africa; but of course what comes to my mind, also, is the quite uncontroversial claim that I'm better off living in middle-to-lower class America in 2023 than I would be if I could change places with even the wealthiest nobleman in 1600. (This point is underscored, also, by my current reading of Katherine Rundell's John Donne biography, which paints a grim picture indeed of many aspects of life back then.) Second, I was struck by Smith's observation that automation generally makes things better for workers--a point anticipating Milton Friedman's argument to the same effect. Friedman said in some lecture or other that the invention, say, of running water did more to alleviate the lot of the poor than of the ultra-rich, since, as he puts it, the wealthy have always had running water (carried on the backs of their slaves or servants), and that it is in fact those carrying the water who are saved from their toil by the advance of technology. For Smith, too, it is the boy who wants to play with his friends who benefits from ingeniously devising a mechanism to do his work for him. This anticipates the Suitsian Utopia laid out in The Grasshopper, in which only those who wish to work need to work, and everyone else is essentially playing games. Finally, I will just gleefully note that Smith, too, treats animals (Chapter 2) as being essentially automata, a point that slots nicely alongside my earlier, cursory musings about Descartes and ChatGPT.

Thursday, December 29, 2022

ChatGPT and the animals

Like most people I know, I've spent the last month+ experimenting with ChatGPT. I've asked it to write poetry; I've asked it arcane questions about various books and articles; I've asked it to create imaginary dialogues, podcasts, arguments, and debates between thinkers I admire from different centuries. And like most people who have put the new platform through its paces, I've come away with a mixed appraisal of its various skills. I've been impressed by its ability to handle requests that generally fall under the banner of what people think of as "creative" tasks; its poetry, for instance, especially within certain constraints, is a particular highlight. And I've enjoyed the dialogues it can come up with, spinning genuinely interesting fictional conversations between historical figures who, in reality, never met. I've been less taken with its logical abilities, which are questionable at best, and at worst make it abundantly clear that the bot isn't actually thinking, but just parroting text back at us. David Deutsch's flying-horse question gives a neat demonstration of this: the bot flatly contradicts itself and misunderstands the question in a way that can only mean it doesn't actually know what it's saying.

Playing with the bot has afforded one kind of pleasure; but nearly as fun and interesting has been to observe various public thinkers' reactions to this new technology. In particular, I've been struck by the far-reaching claims voiced by Tyler Cowen, who for much of this year has been making cryptic references to a coming revolution in AI tech that, he argues, will entirely transform how we use the internet, as well as how we work with and produce ideas. He says that ChatGPT is just the first step on this path--"bread crumbs, not dessert"--and although he could well be right, some of the individual claims he makes seem questionable to me.

For instance, he claims in a recent blog post that ChatGPT's linguistic competence likely narrows the gap between human and animal intelligence. This seems wrong to me--indeed, it's striking that one could make exactly the opposite argument just as plausibly (perhaps more plausibly). Although Tyler doesn't flesh out his version of the argument, I assume his line of thought would run something like this: ChatGPT displays enormous competency without needing to think; the competency approaches that of humans w/r/t language-use; this suggests that we aren't quite as special as we thought, and that other program-running devices like animals (which are like versions of ChatGPT, except they're programmed by evolution rather than by OpenAI) can approach the type of skills we have; thus we differ from them in degree rather than in kind.

However, that argument says much more about Tyler's "priors" than it does about humans or ChatGPT. If anything, to my eyes an argument with the exact opposite conclusion seems even more compelling! Here goes: ChatGPT shows that a program can display enormous competencies without needing to think; animals display enormous competencies, which many people want to attribute to thought; however, we can now see a demonstration that behavior that strikes a casual human observer as thought-like might not depend on thought at all; thus, it follows that animals might not actually need to think in order to perform the tasks they can perform; and if that's the case, the gulf between humans and animals is very likely to be wider than people think!

Of course, the fact that ChatGPT demonstrates that an entity can perform thought-like actions without thinking says nothing about the question of whether some other entity does need to think. So, this argument is purely analytic. ChatGPT's ability to write sonnets doesn't ultimately settle the question of whether rats are conscious. But the version of the argument that I've laid out seems, to me, to increase the plausibility that complex behavior by non-humans can be accomplished algorithmically. Of course, just as Tyler's version of the argument says more about his priors than it does about animals, the same is true for my version. I was already inclined, intuitively, to buy the arguments of philosophers who think that consciousness exists only in people.

Tyler's other claims about the philosophical implications of ChatGPT seem equally stretched. He asks, for instance, why the aliens haven't visited us. Now that we see how easily the trappings of intelligence can be conjured by programmers, surely evolution or some other such force must have created far more intelligence across the universe than we can know? But, to build on Deutsch's point about GPT and the flying-horses, the better conclusion to draw would be that the outward suggestion of intelligent behavior is misleading, and that genuine thought remains rare, on our planet and possibly elsewhere as well.

Wednesday, November 2, 2022

How to write long texts (without hating yourself) - Part I

This past summer marked the completion of the second long document I've written. In the past few years I've produced a book (approx. 94,000 words) and a dissertation (approx. 100,000 words), plus a bunch of shorter pieces like articles (6,000-13,000 words each) and blog posts.

When I was a full-time professional violinist, I spent a lot of time seeking out all the self-help material I could find concerning deliberate practice and technical efficiency. I've found that the same mindset has been essential for my development as a writer. Writing, like playing the violin, is a technique. And although there may be some irreducible element of inspiration and talent in both, there are ways of approaching these activities that can make them manageable and even pleasurable. The purpose of this series of posts is to gather some ideas, both theoretical and practical, about how such processes can work for the writer of long documents such as books and dissertations.

1. Why is writing hard?

The first step to doing anything difficult is to acknowledge that it's difficult, and to understand that it's ok to experience challenges. The second step is to try to figure out why it is difficult, the better to solve the problems facing you. There are many reasons why writing is difficult--some practical, some technical, some aesthetic--but for now I'll focus on a theoretical difficulty particularly pressing when writing long texts. (My framing of this idea is based on Popper's arguments concerning empiricism, theory, and observation.)

Thoughts can take any number of shapes, and they can connect to other thoughts in any number of ways. To write thoughts down is to translate a set of largely amorphous ideas into fixed, linear form, with specific words, sentence structures, and a set order. This process is an act of interpretation: one that (like all interpretations) is carried out with an overarching theory about how and why the writing should take the shape it does. In other words, when we try to capture ideas on the page, we do not only contend with the ideas themselves; we also adopt some sort of theory about what 'the book' or 'the dissertation' will end up being--and it is this theory that allows us to know how to even begin tackling the ideas in the first place. Every sentence in a document is written with a theory about what the document is. But the difficulty is that with each sentence written, the document becomes better fleshed out, diverges in all its messiness from the idealized theory, and, even more important, changes the nature of the theory we might hold about the project as a whole. That is to say, every sentence is both written under a theory of 'the document' and alters the theory of 'the document' that operates in the writing of future sentences. By the time one reaches the end of (say) a 90,000-word draft, one has essentially produced a document in which the component parts--the paragraphs, the sentences--belong to thousands of subtly different conceptual books or dissertations, roughly one for every sentence written along the way.

Actually, perhaps this is a simplification. The speed with which theories of 'the document' change will itself be subject to change as the draft unfolds. In my experience, the first 5-10% of the document is written under a highly consistent theory governed by an outline (assuming one is using an outline!); then change accelerates as development occurs, and each sentence/paragraph exerts a bigger pull on the overall theory; and finally, after about 80% of the draft is written, the theory of the document once again becomes more stable. Of course, it's highly unlikely that the stable theory one arrives at for the final 20% of the draft is similar to the stable theory one held while working on the first 5-10%.

So, even in this weaker statement of the problem, the basic point remains: you will probably produce a first draft in which about 80% of the sentences belong to thousands of different conceptual books, none of which is aligned with the document you finally arrive at--and the first draft is therefore, necessarily, a jumbled mess.

2. What to do about it


This way of framing the challenge of writing may seem needlessly theoretical, but a number of practical points follow directly from it. Here are some extrapolations:

1. First drafts are necessarily awful. Many writing self-help guides (including most famously Anne Lamott's Bird by Bird) acknowledge the awfulness of first drafts. But these guides tend to do so ruefully. We are told that coming to terms with the awfulness of first drafts is a kind of self-acceptance--as though we simply need to acknowledge that we aren't smart enough to write perfect first drafts. I think that this way of framing misses the point. The problem of first drafts, as I see it, isn't that "we're not smart enough to get things right the first time"; the problem is that first drafts are necessarily awful because they are written under a huge variety of mutually contradictory theories of what 'the document' will end up looking like. In other words, the awfulness of first drafts is inherent, and there is absolutely nothing we can do to escape it. It follows that this awfulness is something to embrace and make friends with.

2. From (1) it follows, in turn, that it's best to write an entire first draft before starting to revise or edit. People take many different approaches to this practical question; and even for me, a hardened embracer of bad-first-drafts, there is always a temptation to try to revise during the initial drafting process, for no better reason than the sheer frustration one feels at producing 200+ pages of terrible, cluttered, aimless, unpolished prose.

However, there are a few reasons not to give in to the temptation to edit. The first two reasons are practical, and have nothing to do with the philosophical framework I'm outlining. First, the more one writes, the more momentum one gathers. It's often easier to push through the pain to the end of a document when one writes in a single sweep of productivity. Second, the mindset needed to produce words on the page is very different from the mindset needed to revise what's already there. The biggest difference between the two mindsets is that the first requires a real lack of inhibition and a confidence and belief in one's powers of creativity, whereas the second requires coolheaded discernment, self-criticism, ruthless questioning of one's ideas, etc. It's rarely easy to switch back and forth between the two mindsets, so it makes sense to disentangle them, accomplishing as much as possible under the first mindset before adopting the second.

In addition to those practical points, there are some theoretical reasons to write the entire first draft before embarking on any revisions. The most important is, simply, that revisions, just like first-draft writing, are carried out under a 'theory of the book'. In order to revise any sentence, paragraph, or chapter, you need to have some idea of how it fits into the whole, and what angle you're revising for, and why; otherwise, you don't know what to keep, what to expunge, what to fix, or how. And it's inherently impossible to know these things without having a complete theory of 'the book' as a whole. And, returning to the initial point, because that theory is changing until near the end of the first draft, revisions can't really happen until then. So it makes sense to put off revising until the initial drafting has taken place.

Note that the idea of writing the entire manuscript before revising should not be applied at the level of individual chapters. Many people I know attempt to write and polish individual chapters before completing the rest of the manuscript. (This is particularly true of academics, who often use individual chapters as conference papers or articles, and write and polish them one at a time.) However, there are a few problems with this approach too. First, it presupposes that you know where in the document each argument or example will belong. In everything I've written, ideas move around significantly during the course of revisions, so that paragraphs that began in (say) chapter 1 might wind up in (say) chapter 5 or 6, or vice versa. Polishing individual chapters makes such moves difficult. Second, even when arguments don't move around between chapters, polishing individual chapters before the entire draft is complete makes it difficult to integrate nuances of argument across the chapters. Finally, on a psychological level, as Ayn Rand points out in The Art of Nonfiction, it can be demoralizing to go from a polished final draft of one chapter to a messy first draft of another. Again, better to get all 90,000 words of ideas out of one's system, in all their unpolished messiness, before deciding what to do with those ideas and how to shape the next stage of the document.

3. Edit the complete manuscript in iterations. Editing, like writing, will change your sense of what the document is saying. Therefore, editing should be carried out in multiple stages, since it is unlikely (or impossible) that any one round of editing will create the final version of the document. Rather, the final version of the document emerges from various layers of editing: the revision of chapter 1 will necessitate changes in chapter 6; but those changes in chapter 6 will also cause changes in chapters 1, 2, 3, 4, and 5, which will in turn cause changes in chapter 6. This large-scale process of bringing the book "into agreement with itself," so that the individual components no longer contradict or repeat each other, is a fully absorbing mode of editing--and it's difficult to accomplish it while also tinkering with sentences and word-choice. So, separating out the various rounds of editing can be helpful in making each one as efficient and productive as possible.

How many rounds of editing are needed? This is probably a matter of taste and personal preference. The way I think about the issue is mostly psychological: as Eviatar Zerubavel advises in The Clockwork Muse, the ideal number of drafts is high enough that no individual draft introduces the pressure of getting things perfect all at once, but low enough that the process doesn't seem endless. Of course, these calculations will be different for different people. Zerubavel recommends doing 7 drafts; so far, for long documents I think I'm a 5-draft writer. (By contrast, when I write articles, I typically need only 3 or maybe 4 drafts, since it's comparatively easy to be coherent and integrated when you're dealing with only a single thread of argument.)

My process for going through the drafts looks something like this. First, I write a complete, messy, awful first draft. I attempt to do this as quickly as possible, both because I want to ride the wave of enthusiasm, and because I know that the writing will need a lot of revision anyway, so there's no point in taking even a minute longer than is absolutely necessary to get the awful first draft on paper. I attempt to write approximately 1,000-1,200 words a day during the first draft stage, and I stop myself once I've reached this goal, even if I'm midway through a sentence (especially if I'm midway through a sentence!) because I don't want to exhaust my ideas and have nothing to say the next day. By stopping myself at a given word limit, no matter how excited I am or how well things are flowing, I guarantee that I'll be able to hit the ground running the next day. At the rate of 1,000-1,200 words a day, you can write a 90,000-word draft in approximately two and a half to three months.

Having written the draft, I then take stock. I think about the flow of argument, I think about ways to re-order the pieces, and I think about what might be missing, which sections need to be lengthened or excised, etc. I then write a second draft, moving significantly more slowly than in the first draft. My goal for the second draft is always to make sure the paragraphs and chapters are roughly in the right order. At this stage, I still do not worry about the awful prose. The sentences are a mess, but I try to get the ideas in vaguely the right place in the manuscript. When writing my book, I also experimented with retyping the entire manuscript from scratch in a new, blank document for the second draft (following Zerubavel's advice). I found this to be extremely helpful for two reasons: first, because printing the MS and retyping it from scratch gives me both a fixed, written first draft to work from and the freedom of a blank document. I feel safe making changes and trying experiments knowing that I won't lose work I've already done. Second, it allows me to (subconsciously) revise some of the sentence-level prose as well, since it's very difficult to retype a terrible sentence without making tiny tweaks that improve it. Although fixing the prose isn't my goal at this point, I can make little changes in the process of typing the new draft that I probably wouldn't think to make (and perhaps wouldn't even notice) if I were just picking through the same document as the original first draft.

Having written the second draft and gotten the ideas vaguely in the right order, I then repeat the process at the level of the prose. First I take stock; then I retype the entire document once again.

For drafts nos. 4 and 5, I no longer retype the document; at this point, the main ideas are in the right place, and my goal is to make the prose flow as well as possible.

To be continued, with thoughts on outlining, stamina, and other aspects of the process of writing long documents.

Thursday, October 27, 2022

Utopia and its discontents

What comes to mind when we hear the word "Utopia"? Perhaps a particular set of texts, mostly from the worlds of philosophy or political theory. More likely, we think of a place--one that is Edenic and flawless, a paradise for its inhabitants, albeit one that is impossible to actually construct. But what I've always found strange about the idea of Utopia is the divergence between these idyllic associations that the noun-form of the word has accumulated, on the one hand, and on the other, the negative associations called up by its adjective, "Utopian." Popper and many of his followers in the tradition of classical liberalism leveled this term as a criticism against Marxism. Someone who engages in Utopian thinking chases fantasies of societal perfection while subjecting actual people to all manner of injustices. Indeed, one of the problems with Utopian pursuits is that they seem to offer a blank check to those held in their sway: if one is pursuing what one genuinely believes to be an infinitely good end, then any means needed to achieve that end will come to seem justifiable.

These critiques of Utopianism apply to a surprising number of dystopian worlds--not only the Marxist societies that were the subject of Popper's arguments, but the Sparta-like police-state described in Plato's Republic (a subject of different discussions by Popper!) as well as the fictional Oceania of Nineteen Eighty-Four. Indeed, encountering these dystopias, we may be struck by the fact that they are Utopian for at least some of their inhabitants (for instance, for those who directly benefit from the societal structures, i.e. those at the top of the ladder who are able to maintain power by controlling the thoughts and actions of others). And even some ostensibly Utopian fictions--for instance, Skinner's Walden Two--may easily come to seem distinctly dystopian when viewed from another angle--say, in the case of Walden Two, when viewed as something akin to what Oceania might feel like to its happier, Big-Brother-touting inhabitants. (Even if we dismiss this particular possibility for Walden Two, and claim that the society described in those pages actually is "perfect" in some sense, the fact would remain that it seems like an incredibly dull place to live, and thus would be Utopia only for people with quite specific personalities...)

The fact that utopianism can so easily seem like the other side of the dystopian coin makes me wonder: is there a deeper attribute that all of these imagined societies share? I think the answer is yes--and that attribute is: these are societies entirely devoid of problems. In the case of the worlds that are supposed to look like utopias (Republic, Walden Two, and so forth), the problem-free environment is meant as a "feature"--one that will make life easy and pleasant for the inhabitants. For the worlds that are supposed to look like dystopias, the problem-free environment, tenaciously enforced through (in Orwell) torture, brainwashing, Newspeak, the rewriting of history, the erasure of truth, and so forth, is precisely what makes the fictional societies look so bleak. But, despite the different valence these and other authors give to their worlds, the underlying logic is the same. A superficial form of "contentment" is maintained by preventing people from seeing problems and trying to solve them. The enforced avoidance of problems may be well-intentioned, yet whatever the rulers' intentions, the picture that emerges is of a static society, in which growth, discovery, novelty, excitement, and further exploration and understanding are impossible. This underlying logic is why the very idea of Utopia as it is frequently understood is, in fact, so dark.

There is a twist. If dystopias (and dystopic "Utopias") aspire to be problem-free worlds, then it would follow that an actual Utopia--not a society simply called that, but a society that actually does allow for flourishing in its fullest sense--would be full of unsolved problems, which the inhabitants would pursue freely and as they pleased. This dovetails with Robert Nozick's view, described in the final part of Anarchy, State, and Utopia, of Utopia as not a single kind of place, but a "meta" world in which vast numbers of different societies and associations, each tailored to the different interests of different kinds of people, would emerge. Nozick's Utopia, too, is full of problems, both for those individual societies (whose task, he says, is partly to facilitate the discovery of what good societies might look like--thus implying that this is not yet known, and thus can't be perfectly instantiated even in principle) and for the meta-Utopia, whose problem will involve, for instance, the peaceful interactions and integrations of its constituent societies. Of course, most importantly, the problems that arise in such a meta-Utopia and its constituent Utopias would also exist on the level of individual inhabitants, who, unlike the inhabitants of the Republic, Oceania, Walden Two, and so many other "Utopian" worlds, would be free to imagine different, perhaps better worlds, and thus would be faced constantly with the personal problem of how to reshape their circumstances in pursuit of those dreams.

Of course, one cannot help but be struck by how decidedly unglamorous this account of Utopia sounds. Indeed, a "society full of problems, in which people go about trying to solve their problems" sounds not only unglamorous, but downright mundane. No wonder fictionalized false-Utopias and Dystopias seem to outnumber fictionalized real-Utopias.

However, here too, there is a twist. Yes, there are a handful of actually optimistic Utopian fictions (some Le Guin comes to mind); but there is also a sense in which most fiction is Utopian in the positive meaning of the term--that is, most fiction is about everyday people with lots of problems trying to do stuff that will make their lives better. In this view, the genuine champions of the Utopian vision are novelists like Jane Austen and George Eliot, whose characters may "rest in unvisited tombs," but who nonetheless show us a model of striving in which problems are present, but problem-solving is not thwarted by some oppressive autocrat. Indeed, perhaps this explains my abiding love of rom-com movies, whose plots tend to track the same processes. Rom-coms often highlight the problems and dissatisfactions of individual people, yet they also do so in a way that is fundamentally comic, and in which the overcoming of those problems is allowed even as success is never guaranteed.

Perhaps this is also why my intuitions about works like Così fan tutte or Into the Woods diverge from the intuitions of other listeners with whom I've compared notes. Many over the years have been tempted to see in these narratives a fall from grace--a loss of perfection and a confrontation with the tarnished reality of human life. And yes, that is one view: no character in these two works survives unscathed. But in both cases, what we witness is a process in which problems denied in Act I are recognized in Act II--and, in both cases, the end of Act II brings a loosening of authority and control, and with it a sense that, perhaps, problem-solving in the future will be possible for the characters. In Così, Don Alfonso steps back and allows the lovers to pursue their problem-filled lives without interference. Perhaps, after the end of the opera, the lovers' relationships will grow into authenticity. And in Into the Woods, the narrator is, literally, vanquished, as are the Witch's meddling, supernatural powers. What we are left with at the end is, in some sense, a fallen world. In another sense, however, it is a world that, for the first time in the musical, will allow its inhabitants to "just pursue [their] lives." That their strivings will never come to an end--that Cinderella begins to say "I wish..." even as the music stops--is itself a point of hope for the nascent, optimistic Utopia we see taking shape.

Monday, March 28, 2022

What's Wrong with Don Giovanni?

My colleague Patrick Hansen, director of Opera McGill, recently wrote a blog post discussing the challenges of producing Mozart's Don Giovanni - a retelling of the Don Juan story - during the era of #MeToo and other related social movements. Patrick makes the following points: 1) watching a serial seducer take advantage of women is no longer ok, though it might have seemed less obviously objectionable in previous centuries; 2) defending the character on the grounds that he sings beautiful music is also impossible; and 3) the quality of the opera's music overall is so high that the work cannot simply be jettisoned from the repertory.

Patrick's solution, demonstrated during this weekend's staged productions of the opera, was to set the story as a vampire tale, turning Giovanni into an actual monster who kills rather than sleeps with his victims. This strikes me as ingenious, for two reasons: first, it meets Patrick's explicit aim of forestalling objections to the nature of the story, by making it very clear that the production is in no way lionizing the actions of this character (after all, even if some old-fashioned types might be inclined to condone Don Giovanni's sexual exploits, none will praise him if he is a murderer rather than a seducer); and second, it retains some elemental links with the character's sexual frenzies as depicted in the original plot. As is often pointed out, "undead" characters such as vampires embody aspects of Freud's conception of libido, which is both impossible to satisfy and impossible to kill. By setting the opera as a vampire story, Patrick is able to have it both ways, giving us a title character who is not-a-sexual-predator and yet still infused with many layers of archetypal sexual implications.

Although I appreciate and wholly support Patrick's solution to the "Don Giovanni problem," his discussion of the problem itself got me thinking. I'm always sad to hear people say that recent social or political movements have rendered an old and great artwork unsuitable for modern-day consumers - especially when the piece in question features such excellent music. So, in this case, I found myself wondering whether the opera is, in fact, as problematic as people often suggest it is. There are a number of viable arguments in favor of Don Giovanni - and, as far as I can tell, only one strong argument against it.

The argument against Don Giovanni is, in brief, that the mere depiction of a sexual predator renders the opera unsuitable for modern-day audiences. (Either on the grounds that some viewers might be triggered by it, or simply because depiction, even when ironic or skeptical, may be seen as a kind of approbation.) Many of my own arguments in support of the opera involve the idea that this work, or any, can show behavior without condoning it: that, in other words, the stance the artwork takes towards its own characters, plot, or moral content should structure the way we interpret and engage with that content. This, in turn, rests on the idea that the mere "facts" of the plot do not capture the full extent of what is ultimately being said - a proposition that I take to be largely self-evident, but that many do not. I appreciate that there are legitimate reasons people may wish to avoid an opera with this kind of plot. Although I disagree with them, I think this basically comes down to personal preferences, and I don't expect my arguments to change many minds. So, although I don't personally buy the depiction-is-bad-in-itself argument, I recognize that anything I say in support of the opera will seem a non-sequitur to someone who does buy it.

Nonetheless, here are some ways of thinking about the plot of Don Giovanni that make it seem less problematic than is often assumed. I'm not sure all of these are persuasive, but they should at least give us pause before we reject the original plot as being immoral.

1. Perhaps what is said about Giovanni, including Leporello's valorizing account of his exploits in the "Catalogue Aria," is simply false. To put it plainly, the first possibility is that Giovanni simply isn't a serial seducer. Consider the events of the opera. Over the two acts, we witness: a botched attempt to seduce Donna Anna (so botched that it culminates in a murder, which we need to assume is not Giovanni's normal strategy, since either the law or previous angry family members and their friends would have intervened in the past); a botched attempt to evade an angry ex; a botched attempt to seduce Zerlina (although it has been argued, including by me, that her imitation of his melody in their Act I duet is proof of her willingness - so perhaps this is the one successful seduction in the opera); a botched attempt to re-seduce Zerlina during the Act I Finale; and a botched attempt to exchange clothes with Leporello and seduce Elvira's maid.

If we are to believe what Leporello claims about his master during the Catalogue Aria, then Giovanni can't afford to have off-days like this one. If we take the day portrayed in the opera as a representative episode from the Don's life, then Leporello's account is false, and we'd be watching not an actual serial-seducer but simply an incompetent wannabe. I recognize that this conjecture is problematic, since it still leaves open the possibility that the characters, especially Leporello, think of serial seduction as a goal worth aspiring to. Though maybe this is softened by my next point:

2. Perhaps the opera itself condemns the Don. This seems to me the obvious choice in defending the opera: it's clear that although the plot depicts his (attempted) sexual conquests, in fact the opera is about his punishment. Indeed, "Il dissoluto punito" - "the dissolute man, punished" - was the title at the work's 1787 premiere, with "Don Giovanni" as the subtitle. On the level of plot, I think it's misleading to say that the opera depicts the actions of a serial seducer. More accurate is that what's on display are the intentions of a serial seducer, plus the punishment meted out to him. The musical structures confirm that this is, indeed, how we are meant to take in the work. The fact that Mozart introduces the statue's music as the first section of the overture is his statement that we are not watching, unbiased, as the Don pursues his various activities on stage, but rather watching with the knowledge of the supernatural censure in which his activities will result. Imagine if the Don Giovanni overture more closely resembled the Figaro overture, without any hint of the ombra music. Were this the case, the moral outlook of the opera would feel very different, since our starting-point would be in the less judgmental comic world, and we would watch the Don operate in a related, non-judgmental frame of mind. In reality, however, the opera introduces itself with the immediate announcement that what follows will be a story of judgment and damnation.

Of course, the fact that the Don is condemned by every other character, including ultimately by Leporello during Elvira's attempted intervention in the Act II Finale, also counts. Even Leporello's support throughout the opera, felt perhaps most keenly in the Catalogue Aria, is flimsy: the servant tries many times to denounce the master's lifestyle and quit his service, but is never allowed. The opera thus makes it clear that the Don's actions are bad both by terrestrial and celestial standards. Given that this point is so self-evident, I'm surprised that people who are on board with recent social movements haven't more enthusiastically embraced the opera, which, like Figaro, can be read plausibly as a statement of feminism avant la lettre.

3. Perhaps the opera is not really about sex. This final possibility may seem counterintuitive given...the actual literal contents of the libretto. But in much 19th-century criticism, including Kierkegaard's extended analysis of the opera in Either/Or, it is pointed out that the Don's sexual needs are exaggerated to the point of absurdity, and that perhaps the point of his character is to represent not sexuality, but rather the extreme limits of appetite as such - free of any particular impulse. (This reading also meshes nicely with Patrick's vampire theme, since with vampires, too, the fact of the appetite itself is far more salient than the particular need to which it is drawn.) Many authors have approached Don Giovanni from this angle. Nicholas Till, in Mozart and the Enlightenment, sees the piece as an essay both on Christianity and early theories of liberalism, particularly given the paean to freedom in the Act I Finale (Viva la libertà!). Karol Berger, too (in Bach's Cycle, Mozart's Arrow) sees it as a tract on freedom, on transgression, on politics, on the nature of individuals vs collectives in society. Indeed, so much does Berger take it for granted that sex is of no real importance to the opera's meaning, that he spends a chapter likening Giovanni to Faustus, a character for whom the pursuit of knowledge rather than physical pleasure is the abstracted, undead drive. Others, meanwhile (most famously Wendy Allanbrook) liken Giovanni to an Odyssean "No-Man": a symbol rather than a human figure. In all of these readings, even where the authors diverge on particulars, we find a shared conviction that Giovanni is not so much a sexual predator as a transgressor of normative moral values, and that sex simply serves as the plot-device through which Mozart and Da Ponte explore these bigger societal and human questions.

If listeners find at least one of these readings to be plausible, then the opera deserves to be accepted on its own terms, even with stagings that depict the actions described in the original libretto. At the very least, detractors who think that #MeToo poses a fatal problem for Don Giovanni should find that the opera takes an anti-Giovanni position and defends modern-day social values. And those motivated not specifically by #MeToo but by broader moral concerns should find that the opera's condemnation of this character is decisive and unambiguous.

Wednesday, January 26, 2022

Reading the Foreword to "Ungrading"

Following my "Experiments in Grading" post of a few weeks ago, a colleague suggested that I peruse the newish volume Ungrading (ed. Susan Blum; West Virginia University Press, 2020). I haven't yet gotten far in the collection, and in any case I don't expect to read the book cover-to-cover, since not all of the topics are equally applicable to my interests or pedagogical problems. However, I had some reactions while reading the Foreword by Alfie Kohn. I'll record a few of these thoughts here, and follow up with future posts as I read more of the book. (I'll also continue to edit and expand my earlier post on grading.)

In general, in the Foreword (and, from what I can tell, perhaps the rest of the volume as well), the case against grades is vastly overstated. Kohn describes nine steps along the path from grading to ungrading--and although I have followed some parts of this path myself, my underlying rationale is almost always very different from the reasons stated here.

Kohn's first step:

We start by worrying about grade inflation before gradually coming to realize the real problem is grades themselves.

I certainly have worried at times about grade inflation...but what about the second part of that sentence? I would frame the issue not as "the real problem is grades themselves," but rather as "grades are problematic." In my previous post, I outlined a number of reasons why I think that grades are problematic, mostly involving the conceptual overburdening of grades, and the attendant practical issues that arise as a result. I stand by this, yet it doesn't necessarily follow that grades themselves are "the problem." Grades do serve at least some useful purposes, including signaling. Indeed, if Bryan Caplan is correct that signaling is the overarching aim of academia and schooling, then some form of grading is inevitable, since the system wouldn't be able to function unless something facilitated this basic end. Of course, as I also acknowledge elsewhere, to the extent that grades serve as signals, they are imperfect signals, not least because they don't usually carry any explanatory detail. To outsiders who want to know whether a student is competent, grades indicate only the barest outlines of an answer; and to students receiving grades without an accompanying explanation, no real learning can take place, because the feedback implied by grades is too diffuse to provoke any particular improvements. But, be that as it may, some version of this problem will be replicated in virtually any system for signaling quality that stops short of actually describing, in real detail, individual skill-sets. For this reason, some system like grades is probably a necessary part of life. It isn't clear that this system or any other is itself "the problem".

Kohn's second step:

...We make sure that everyone can, in theory, get an A. Only then do we realize that rating, too, is a problem, even if a less egregious one than ranking. We've eliminated the strychnine of competition, but there is more to be done if we're still dispensing the arsenic of extrinsic motivation.

This is an extension of the previous point: ratings are inevitable. But, I think it must be added, they are in some cases beneficial. The only way to improve is to receive feedback on one's abilities and performance, and although I agree that grades aren't the best way to deliver feedback (an objection I raised earlier), it isn't the fact of rating that is the problem, but the fact that grade-based rating, in its purest form, ends up delivering ratings without explanations. The solution here could be very robust rubrics, or detailed commentary. As for extrinsic motivation, it doesn't strike me as obviously problematic. Students are individuals, and as such will be motivated by any number of different considerations; who am I to tell them that only intrinsic motivations matter?

Kohn's third step:

...We stop using letters and numbers to rate what students have done and instead use descriptive labels such as "needs improvement," "developing," "meeting/exceeding expectations," "proficient," and "mastery." Step two: we realize these labels are just grades...by a different name and that we need to get rid of them too.

This is the first point at which my disagreement becomes serious. Such labels are not grades by a different name; they are partial explanations of why a student has succeeded or failed at a particular task, and in what ways. Unlike a straight-up letter grade delivered without accompanying commentary, these explanations do, or at least can, make clear what needs work, and what kind of work would be best. They aren't perfect, of course; and crucially, they leave room for imprecision that can, in the wrong pedagogical hands, end up being just as useless to students as grades themselves often are. But what would remain of education if we got rid of the possibility of telling a student that work needs improvement?

(Incidentally, I also find labels such as these useful when they are applied by editors to my own professional work. One of my own intuitive measures of how useful a particular comment might be to a student is my sense of how helpful the comment would be if an editor voiced it while working through one of my manuscripts. I want to know what doesn't work, what could be better, and, ideally, what steps I might take to make the necessary improvements. Likewise, I assume that students at least want to have the option of hearing similar thoughts from me--and these comments are a baby-step in the right direction, even if they do not end up saying enough.)

I agreed with Kohn's fourth through seventh steps, which advocate for more precision and description in feedback to students--though, again, not because I think grades themselves are bad (which is Kohn's ultimate point) but because I think they are often overburdened or poorly applied, and one of the ways to remedy this is to be more explicit about what the grades are supposed to accomplish.

However, in step 8 we encounter "ungrading" in its most usual form: "we meet with students individually and ask them to propose course grades for themselves, while reserving the right to accept their suggestions." Step 9 is simply an intensified version of step 8: students choose their own final grades, and we reserve no right of refusal.

One problem with this approach is that it places undue psychological pressure on the students. Students find themselves in the position of second-guessing what we professors think about them, and trying to weigh this against what they then think we might think when we see the grade they propose...and I imagine that such mental calculations are ultimately pretty harmful--more harmful, certainly, than anything they experience when I just go ahead and give them a (probably decent) grade along with some constructive feedback.

Another problem is an extension of what I have already mentioned. Grades do serve a purpose in the culture as a whole (even if mainly for outward-facing signaling), and to strip them of that purpose--which would surely occur if every prof left grading entirely in the hands of students, since external bodies would no longer think of grades as a reliably "objective" standard--would most likely lead not to the abandonment of rating systems altogether, but simply to a new kind of rating system: a different version of the same thing...and here I would invoke the classic argument from conservatism. Why fly to ills we know not of? When you don't yet know what the new system will be like or what further problems it may bring, it's important to make changes that are cautiously incremental rather than radical. It should follow that the best way to deal with the problems posed by grading as it's currently practiced is not to tinker with the culturally shared elements of the grading system, but to experimentally tweak those aspects that can be safely adjusted within the comparatively private context of one's own syllabuses.

Sunday, January 16, 2022

The post formerly known as: Is "Bad Music Love" Equivalent to "Bad Movie Love"?

In mid-January 2022, I wrote a somewhat informal blog-review of Matthew Strohl's excellent, recent book, Why It's OK to Love Bad Movies. As it happens, I ended up reviewing the book a few months later for the Journal of Aesthetics and Art Criticism - and once I had signed the journal's publishing agreement, I needed to remove the blog post, which overlaps a great deal with my review. When the review is officially published, I will place a link on this page; in the meantime, I have deleted the passages that made it into my review and am leaving whatever passages are unique to the blog post. Although the material that remains is probably hard to follow out of context, I wanted to preserve these thoughts, since they seem interesting and relevant to other ideas.

---

The best book I have read so far this year is Matthew Strohl's recently released Why It's OK to Love Bad Movies. Granted, the year has only just begun; yet I think this is probably the best book on the philosophy of art I've read in a long time. The writing is spectacular; the arguments are convincing; and most of all, the sheer love of film conveyed throughout the book is wonderful. Strohl accomplishes in a breezy 194 pages what many other philosophers require many hundreds of pages to do.

Strohl's book is ostensibly about movies; but in fact, it is a defense of the good life. The argument he builds is, initially, focused only on film. He begins by defining a stance of aesthetic appreciation for bad movies, termed Bad Movie Love. It goes like this. When saying that a movie is "so bad it's good,"

"'Good' is being used in the final sense while 'bad' has a special meaning. ...One recognizes that there is some limited sense in which the movie is bad, but...one ultimately judges it to be aesthetically valuable, in part because it's bad in this limited sense." (p.4)

This is a fine place to start, though at first the idea seems like it might veer into question-begging, since even if a bad movie is judged aesthetically valuable, it isn't initially clear why we'd want to dwell on bad movies when there are so many good movies around. I agree with Strohl that Batman and Robin is "so bad it's good"; but for a decent portion of the book I don't yet see why such a film is more worthy of my time than Vertigo, which, let's face it, is "so good it's good". (Or In the Mood for Love, or Adaptation, or Mad Men, or any of the other good-good things I've watched or re-watched recently.)

...

I'm fully convinced by this. I now feel personally liberated to enjoy some of my guilty-pleasure films. Indeed, I'm not only convinced by the arguments, but inspired by them.

It's natural to ask, though, particularly in light of my other interests, whether these arguments apply equally to other areas of aesthetic pursuit. Being both a musician and an avid reader, I can't help but wonder whether Bad Music Love (or Bad Novel Love) is as permissible as Bad Movie Love.

My guess is that, at least in the case of music, the answer is: no. Bad Music Love might be far, far worse than Bad Movie Love. Why? First, the disclaimer: I don't think that I'm biased by being a musicologist. True, my professional work demands that I have discerning musical tastes, and I do spend a lot of time trying to articulate why various compositions are good or bad. And I recognize that my general disposition may seem to place me in the same category as Strohl's imaginary "Professor Stuffypants". But I don't think these biases impinge on my reasoning about this particular point (though my friends might say otherwise...).

My hypothesis involves both the relative quantity of bad music vs. bad movies, and the relative quality of bad music vs. bad movies. I suspect that there is far more bad music in the world than there are bad movies, and that the bad music is infinitely badder than the bad movies.

...

I could write a bad piano sonata today, alone in my apartment, with no money or resources beyond some paper and a pencil. (I don't even need an eraser! This is supposed to be bad; why bother revising?) I would have a harder time making a bad movie. Perhaps I could pull it off with my phone and a selfie-stick...but this would not be the kind of bad movie Strohl would watch, since it would have no reliable distribution, and thus would be unlikely to make its way to his TV. This question of distribution raises another point: it isn't only that bad movies take more time and effort to make than bad music, but that their chances of being preserved and distributed are relatively low. Anyone can write bad music on a manuscript leaf, and a couple hundred years later it may turn up digitized in the Düben Collection or in the Dresden State Library, awaiting discovery by zealous archivist period-performers who want to play bad music; not so for bad movies.

The barriers to the distribution of bad movies are not the only salient considerations. Movies (even bad ones) require more people to make than music does. This also has its effect. Apart from exceptional cases in which a bad filmmaker has unchecked power (cf. Strohl's discussion of Plan 9 from Outer Space), most movies are made with the involvement and input of many people. This alone almost guarantees the presence of error-correcting mechanisms, since even a bad director may collaborate with people who end up improving the final product. For this reason, I suspect that even most "bad movies" aren't really all that bad--a suspicion corroborated by Strohl's book, which argues that many of these movies, even the Twilight films, are actually rather good. Bad music, on the other hand, can be profoundly bad. Often, no external ear has been engaged to criticize and correct the final product; it is the composer's own intuitions, errors and all, that feature in what we end up hearing. (Incidentally, this has changed in the modern era of pop music, where songs are very often written by a performer in collaboration with others behind the scenes...and it's probably for this reason that the percentage of competent pop songs I encounter is far higher than the percentage of competent non-Mozart/non-Haydn 18th-century symphonies I encounter.)

Does any of this change what I make of Strohl's arguments? I suspect so. Perhaps his defense of bad movies is not really a compelling defense of bad movies, but a defense of medium-bad or even pretty-good-but-not-great movies. The movies Strohl discusses are, ultimately, still worth our time. We may not be in the mood to ingest Vertigo (just as we may not be in the mood to follow Bach fugues or parse a Schoenberg piano piece); but in Strohl's hands many movies that seem superficially bad can still contribute much to our lives. This is more than I would say of most musical compositions I encounter.

...

Perhaps the saving grace in the case of novels is that, as with film, books require the involvement of multiple people along the path from manuscript to published edition, and thus present opportunities for improvement and error-correction. Much of the art-music produced over the past 700 years survives only in manuscript, or in early published editions, and is thus hit-or-miss in a way that may not apply to published novels or professionally distributed films. Of course, according to Sturgeon's Law, most work in any given domain is bad--and this applies everywhere, including to published novels and professional films. But the error-correcting processes in these media nonetheless seem to function reasonably well in improving overall quality.

One final--contentious--possible explanation for the general high quality of film in comparison with older music and older texts is that aesthetic standards overall have improved over the past century, and thus that art-forms invented more recently have the benefit of having developed within a context of greater artistic and aesthetic understanding. The fact that composers like Bach, Mozart, or Beethoven figured out how to write reliably great music during the 18th century is, when you think about it, completely remarkable considering just how little progress had yet been made in many other areas of cultural, technological, scientific, and philosophical life. That they succeeded where so many of their contemporaries failed is testament both to whatever progress had occurred during and just before their lifetimes, and to the intellectual labor each of these artists did in improving aesthetic knowledge. (There was probably also a lot of luck involved.) Yet as general cultural and scientific knowledge improved, so too did artistic standards: thus, my experience suggests that there is a very high likelihood that an unknown, non-canonical piece of music written after around 1850 will be pretty decent; I can't say the same for the period 1750-1800. That film was invented in the 20th century may be at least a partial explanation for the fact that, as Strohl shows, even bad movies can be aesthetically valuable.

Monday, January 3, 2022

Experiments in Grading

My transition into professional academia has, for the most part, been very smooth. The "three pillars" of academic work--research, teaching, and service--all involve activities that I fundamentally enjoy; and, as a result, my job requires of me few tasks that seem onerous. Yet one point of consistent confusion and perplexity during my first few years as a professor has been...grading.

Before going to college, I thought a lot about grades: not philosophically, but in the more mundane sense that I was always worried about what my transcript would look like, and what effect this might have on my future. When I was an undergrad, and I could exert a bit more control over the selection of courses in which I enrolled, my grades improved. I stopped worrying, and indeed I stopped thinking about grades altogether. This changed a bit when I started grad school in the UK, but only because the cultural practices of grading differ so much across the pond. I went from receiving routine A's to receiving routine 70s, and it took a bit of adjusting for me to realize that this wasn't actually a massive step down. I stayed in the UK for my PhD, and thoughts of grades once again vanished from my life.

But now, as a professor, I have to deal constantly with grades. I spend the beginning of each term instituting grading scales and points-schemes that (I hope) will incentivize students to participate in certain ways in my courses; I spend the bulk of each term experiencing in real-time the results of such schemes; I spend many hours as the term progresses assigning grades; and then I spend the final portion of each term calculating course grades and dealing with grade submission, extensions, and other administrative joys.

But in addition to being forced to think about grades through constant exposure to activities related to grading, I am also inclined to think about them for other reasons. The most pressing reason is ethical, as I consider my relationship with the students in my courses and the impact that my grading practices will have on their futures. Another reason, more practical, is that some causal relationship exists between the structures and grading schemes I institute and the experience of my courses--and I would like to make those course-experiences as positive as possible for everyone, including myself.

All of this has led me to experiment a fair amount with different styles of grading in my courses. Now, as the Winter 2022 semester approaches and I put the finishing touches on yet another syllabus, I've decided to reflect on some of these experiments, from previous terms and the coming term.


Initial reflections: what do grades accomplish?

Although this is not the place for me to sketch a comprehensive theory of grading, there are a few premises on which all of my experiments depend. Some of these have little to do with grades as such: for instance, I believe that students are people, that they are as intelligent and rational as I am, and that they are the best judges--certainly better than I could be--of how their own time should be used. I also believe that students are individuals and that learning takes place through creative processes that occur within individual minds. These background beliefs, broadly libertarian, lead me to avoid exams or tests, or any other method of assessment that would require all of the students to do the same activity, answer the same questions, or "prove" that they've acquired the same bit of knowledge. Instead, I try to minimize course requirements and come up with assignments that offer the highest possible degree of flexibility. (This is delicate, because flexibility for the students often translates into administrative demands on me--and one corollary to my belief that students are people is that professors, too, are people, and thus deserving of flexibility and freedom.)

Other premises are explicitly about grades themselves: for instance, that grading as it is generally practiced often seeks to accomplish several different (and often conflicting) aims, including 1) signaling students' level of accomplishment or competence to the outside world; 2) providing direct feedback to the students; 3) ranking students into hierarchies of achievement; and 4) enforcing disciplinary practices. It seems intuitively obvious to me that these four aims should not all be bundled together; thus, many of my experiments and explorations are aimed at un-bundling them, and attempting to find ways to accomplish each of those aims (to whatever extent I wish to accomplish them) more coherently, explicitly, and fairly.

Of the four uses of grades, the one in which I could find the least wiggle-room was (1) signaling students' accomplishments and competency to the outside world. Grading practices are culture-wide, and if I alone decided to use grades in a radically unique way, this would certainly be misunderstood by future admissions committees and outside evaluative bodies, and this would have negative consequences for my students. By contrast, (2) and (4) could be accomplished within my courses using different means (say, by giving substantive verbal feedback).


Undergraduate grading

Thus, my first order of business when I began to design courses was to lower the burden on grades. I began by removing disciplinary measures from my grading schemes. When I was a student, it was normal for late work to lose 10% of its mark for each day (or hour, or any other arbitrary unit of time) that passed after the due-date. Ditto for errors of spacing, formatting, font-size, and so forth. I did away with this, announcing in my syllabuses that grades would reflect only the content of assignments, not the circumstances of their submission.

This meant, in practice, that I had done away with my most effective means of enforcing deadlines. In all but one of my courses, this has been a success: students did not take advantage of my flexibility. Of course, some students submitted their work very late--but allowing late submissions follows from my conviction that students understand their own interests and schedules better than I do. This flexibility became a problem only in the large (60-student) course I taught in Fall 2021, when a very high volume of late submissions made it difficult for me and the TAs to keep up with grading. I will teach this course again in Fall 2022, and one solution may be to set a final deadline for late submissions: for instance, to say that I will not use grades to enforce deadlines, but that any student who wants late work to be accepted at all must make arrangements with me beforehand. This way I would preserve the flexibility (since I plan to accept all requests for late work) while preventing students from taking advantage thoughtlessly.

Another experiment, instituted early on, was to do away with grades-as-feedback. In principle, this is simple: just write detailed feedback for each submission. In practice, it is more complex, because it leaves open the question of how to assign grades. My solution was to think of each assignment as being pass-fail (which, incidentally, is how things work in the real world of professional research, where articles either get published or they don't). I approached this (in a 30-student undergrad course) by assigning higher-than-I-would-otherwise-give marks to passable papers (everyone with a decent paper got a B+ or higher), writing extensive feedback to each student, and assigning lower-than-I-would-otherwise-give marks to bad papers, offering the authors a chance to revise and resubmit. This course, taught online in Fall 2020, was perhaps my most successful experiment in grading: the students uniformly stopped worrying about their grades and started thinking about their ideas--and the result was that the papers became more interesting, more daring, more fun to read, and better as the term went on. Although I had to do a bit of extra work when students resubmitted their papers, it was well worth it...and the number of revisions I requested decreased hugely over the term. (I will also add that for this course, the paper prompts were very open-ended, meaning the students were free to find ideas that interested them and write about those. This also made the papers more enjoyable to grade, since in the best cases it stimulated the students' creativity, and in the worst cases it at least guaranteed some variety among the submissions.)

This approach has been a success in five of the six courses in which I've attempted it. The course in which it was least successful was, as mentioned previously, a 60-person course, in which the comparative flexibility resulted in some submission-related chaos as the term progressed. Another potential problem in that course was that the students were caught off-guard by my insistence upon letting them follow their own interests when crafting writing assignments. Some didn't understand how to select a topic, and were reluctant to ask for help; others used the lack of disciplinary grading as an excuse to ignore the deadlines and let work pile up over the term. Others, of course, thrived, and wrote a series of brilliant and incisive papers. I don't count this experiment as a complete failure, since many students did ultimately report that they enjoyed the course and the assignment structures; however, I will modify the policies before I teach this course again.


Grading in graduate seminars

As of January 2022, I've taught three graduate seminars at McGill, and will be teaching my fourth during the coming term. The challenges presented by grading in grad seminars are different from those I encountered while teaching undergrads. On the one hand, graduate students generally receive As in seminars (or at worst, A-); thus, grading is relatively unimportant as signal, feedback, or ranking. On the other hand, the point structures and nature of assignments in graduate seminars still influence the course experience, and in some cases matter even more than they do in undergraduate courses, since so much of the professional initiation that occurs in grad seminars depends on individual reading and writing practices.

I've generally been bolder about experimenting with grading systems in grad seminars than in undergrad courses: first, because, as I've already said, everyone ultimately gets an A, which means the stakes are lower if my experiment goes awry; and second, because if things do go awry, grad students are more likely than undergrads to be sympathetic and professionally interested when I lay out the rationale for my experimentation.

For the first two grad seminars I taught, my grading experimentation (such as it was) involved similar parameters to those used in my undergrad courses: I didn't use grades to enforce deadlines, but I nonetheless scheduled assignments, gave each a point-value, and demanded that the students complete the work for credit in the course.

Grading, scoring, and board games

In Winter 2021, however, I began to experiment more radically. For my seminar that term, I hit upon the idea (stimulated by reading Thi Nguyen's new book Games: Agency as Art) of developing a points-based grading system resembling the scoring systems of board games. In general, in many of the board games I enjoy, players win by accumulating points--but the specific ways in which they accumulate points are left largely up to them. In the Ticket to Ride games (favorites while I was a PhD student), you can accumulate points by completing "routes", or by laying track aimlessly, or by building stations, etc. Each player works largely independently (though within a structure that emerges as a result of the other players' actions), accumulating points in any way and at any pace that seems advantageous.

I tried applying this idea to my course. Rather than establish a series of required assignments, I offered a menu of possible assignments, each of which was attached to a point value. Give an in-class presentation for 3 points; write a short argumentative opinion piece and distribute it to the class for 4 points; write a response to an in-class colleague for 4 points; write a term-paper for 10 points; and so forth. I imposed no deadlines, but simply told the students that they needed to accumulate 29 points by the end of the term in order to receive an A in the course. They could write three term papers; they could write one term paper and give seven in-class presentations; etc. I created a simple spreadsheet to keep track of the students' scores as they racked up points...and the course's grading scheme became a board game.
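To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of tally my spreadsheet performed. The point values mirror the menu described above; the function names and the "student_a" example are my own illustrative inventions, not the actual spreadsheet.

```python
# A minimal sketch of the board-game grading tally (illustrative only).
# The menu mirrors the point values mentioned above.
MENU = {
    "presentation": 3,    # in-class presentation
    "opinion_piece": 4,   # short argumentative opinion piece
    "response": 4,        # response to an in-class colleague
    "term_paper": 10,     # term paper
}

TARGET = 29  # points needed by the end of term for an A

def total_points(completed: list[str]) -> int:
    """Sum the point values of a student's completed assignments."""
    return sum(MENU[kind] for kind in completed)

def progress_report(student: str, completed: list[str]) -> str:
    """Show how far a student is from the end-of-term target."""
    earned = total_points(completed)
    status = "target reached" if earned >= TARGET else f"{TARGET - earned} points to go"
    return f"{student}: {earned}/{TARGET} points ({status})"

# One term paper plus seven presentations clears the bar: 10 + 7*3 = 31.
print(progress_report("student_a", ["term_paper"] + ["presentation"] * 7))
```

The appeal of the scheme is visible even in this toy version: any combination of rows summing past the target is as good as any other, so each student plots their own route to 29.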

As with any radical experiment within a course, the results were very mixed. Some students thrived on the points system, since they were able to organize their time more thoughtfully (for instance, they could sign up for presentations or submit written work during weeks when they didn't have lots of work to do for their other seminars where stricter deadlines were imposed). For other students, the points system provided too much freedom, and left them feeling unmotivated, since they didn't have deadlines to work towards. I have not yet hit upon a solution to this problem. It is not clear whether the right approach is to try the board-game system again, and simply present it in a different way at the beginning of the term, to prepare students to succeed with it, or whether the system itself must be tweaked to provide a slightly more rigid framework.

Winter 2022: writing, research, and teaching skills

For the coming term, I have split the coursework into two different categories. One category, involving in-class presentations, responses, and other standalone assignments, will follow the board-game system, with no due dates, and the expectation that the students will accumulate some quantity of points by the end of the term. (More about this category below.) The other category, culminating in the submission of a term-paper, will introduce a new system, inspired by the writing habits that I hope to help the students develop. Rather than leave the term paper until the end of the semester, I have broken it down into individual components (thesis statement, outline, introductory section, body paragraphs, etc.), which the students will complete progressively as the term unfolds--roughly as in the sketch below.
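Here is a minimal sketch, again in Python, of how such a progressive schedule might be represented. The component names follow the list above, but the dates and the helper function are hypothetical placeholders of my own, not the actual syllabus.

```python
from datetime import date

# Hypothetical milestone schedule for the term-paper track (illustrative only).
PAPER_MILESTONES = [
    ("thesis statement",     date(2022, 1, 21)),
    ("outline",              date(2022, 2, 4)),
    ("introductory section", date(2022, 2, 18)),
    ("body paragraphs",      date(2022, 3, 11)),
    ("full draft",           date(2022, 4, 1)),
]

def next_milestone(today: date) -> str:
    """Report the next component due, keeping each goal frequent and low-stakes."""
    for component, due in PAPER_MILESTONES:
        if due >= today:
            return f"Next up: {component} (due {due.isoformat()})"
    return "All components submitted; assemble the final paper."

print(next_milestone(date(2022, 2, 10)))
# -> Next up: introductory section (due 2022-02-18)
```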

By distributing this writing project over the entire term, I hope to both instill good writing habits in the participants and give them frequent but low-stakes goals to meet, so that the lack of deadlines in the board-game components of the course will not be demoralizing. I hope, if all goes well, that this will help to quell my own existential confusion about the nature of grad seminars and what exactly students are meant to take away from these courses. Having spent my entire graduate-student career in the UK, I never took a seminar during my PhD studies; thus, I lack recent models for how such courses can be taught and what they are intended to convey. In my other seminars, the discussions have been interesting (to me, at least), but I've never quite known why we should discuss these topics, particularly when the topics or the reading lists follow my own idiosyncratic interests rather than the research programs of the students. I recognize that there is an element of intellectual apprenticeship to the grad seminar: the students come to understand how I see my way through an academic issue, or how I compile a reading list, and perhaps, through a bit of effort along the way, might develop skills resembling mine.

In the current term's seminar, the freestanding assignments involving presentations and readings have also been tweaked slightly in the hopes of increasing their practical utility. The writing assignment is designed to instill good habits; but I have angled the other projects, too, towards professional skills. Rather than coming up with an exhaustive reading list, I have left open slots on the reading list which the students themselves can (for points, of course) fill in. If all goes well, this will give them a chance to exercise the skills of hunting down sources and navigating the dual constraints of finding sources that match their own interests and that match other scholars' (i.e. their classmates') interests. The in-class presentations, too, will not be devoted simply to the summarizing of texts and analysis of arguments, but to presentations of the kind they might someday give in an upper-level undergraduate course. This will allow them to test out pedagogical styles while also engaging with a range of scholarly sources.

I will continue to reflect on this system's success (and the challenges it presents) as the term continues! One major variable is the fact that, owing to the new Covid surge, the first few class sessions will be moved online. My Winter 2021 seminar was held over Zoom, and I found the online format to be a challenge, given that it sometimes impedes free-flowing discussion.

Thursday, December 16, 2021

Franz Joseph Macdonald

Update, April 1, 2022--I posted a slightly revised version of this essay to my blog at violinist.com.


Like many others, I spent the early Fall of this year watching videos of the late Norm Macdonald--and, also like many others, I had to wait until the end of the semester before finding the time to muse about the many facets of his virtuosity. Norm's talents are already widely recognized: his daring selection of subject-matter, his deadpan delivery, the drawn-out jokes that manage to ramble endlessly while still remaining coherent. Yet I am also impressed by the degree of self-reflection and thoughtfulness with which he could discuss the technical parameters of joke-telling: for instance, in this video in which he describes the perfect joke as one in which the setup and punchline are identical or almost identical:


In a blog post, philosopher Eric Schliesser makes the lateral connection between this theory of the "perfect joke" and academic philosophy, a field in which an argument of the form "A=A" can itself be a source of humor. Schliesser speculates that, for some thinkers (he names Wittgenstein), this idea of identity-as-comedy must have been old hat; but of course the same can be said of others as well, including Slavoj Žižek, whose slim volume of jokes includes plenty that conform to this model.

This particular way of conceiving the structure of a joke--through the identity of "A=A"--is, in its abstract formalism, philosophical through-and-through. Yet various connections with music also suggest themselves. Music as an art deals regularly in humor and wit; it is also necessarily formalistic, since musical utterances accumulate meaning not through their semantic content, but as a result of the forms in which they are laid out. And, owing to its formalistic qualities, music is an art of tightly controlled repetition.

The most familiar way in which repetition features in music is through the formal trope of the recapitulation: that simultaneous return of the global tonic and the first theme, the moment that gives purpose to the trajectory of Sonata Form. Yet the kind of repetition I have in mind here as the basis for A=A-style humor occurs on a much smaller scale, through the repeating motifs that give any piece of music, in any genre, its coherence. (These repetitions can be exact, or they can be slightly altered; for instance, in Mozart's famous "easy" Piano Sonata K545, the opening rhythm in bars 1-2 repeats, with different pitches and chords, in bars 3-4, giving the impression of a consistent, paired statement-and-response.) Much of this small-scale repetition is made possible by the modularity of music. Phrases are made up of small, separable chunks, and these can be repeated, shifted around, combined, recombined, and otherwise mutated as a piece unfolds.

Much musical humor arises through the deft repetition of such chunks. One particularly famous instance of wit in music is Haydn's "Joke" Quartet, op. 33 no. 2, whose last movement presents a series of small modules in the first phrase:

Module A, in this case, occupies the first two bars. Bars 3-4 present an altered version of A, with a slightly varied melody and a harmony 'on' the dominant; and bars 5ff present yet another altered version. The eponymous "joke" is that A contains a melody suggestive of the beginning of a movement but a bassline suggestive of cadential closure; thus, in the first two bars, Haydn uses A to begin the movement, but in the final moments he uses A for the ending as well:


The humorous effect is helped along by the grand pauses as well, which prevent unsuspecting listeners from knowing quite where the piece ends. One possible end comes in bar 166, where the first theme also ended. But Haydn extends the movement, leaving a gap of another three bars, and then presenting A--which, though it sounds like the start of something new, also provides a satisfying harmonic ending to the work. In this case, the joke is, quite literally, that A=A: that the setup and punchline are contained in equal measure in the kernel with which the movement begins.

Another, similar example of the same kind of humor occurs in the last movement of Mozart's Symphony no. 39, when the gesture presented at the beginning as an "opening" module is reused in the final bar. Here, as in Haydn, the same recognition occurs in hindsight: we realize that the gesture traces a descent from scale-degree 5 to 1, and that it can therefore double as a perfectly good closing module, even though in its first appearance this motion is hidden by the rhythmic underlay.

Jokes of the form A=A, musical or not, seem to be worlds away from other modes of humor explored by Norm Macdonald. Yet extensions and variations of this strategy can be located throughout his corpus, as well as in some musical works. A staple of his humor, for instance, is the long, rambling joke that ends with a silly (often inane) punchline. The celebrated "Moth" joke is exemplary:

There are many respects in which the narrative strategy at work in this joke can be related to musical compositions. For instance, much of Norm's delivery involves "playing dumb": he pretends to be inarticulate or otherwise unable to follow the joke's narrative thread, the better to make a heroic rescue when, at the end, he delivers the unexpected punchline. This technique of feigning incompetence has a precursor in many 18th-century musical compositions in which the simulation of improvisation (as in written-out cadenzas, fantasias, and capriccios) would be more theatrically convincing if the player or composer used haphazard structures or modulations. These would seem to be incompatible with careful, compositional planning, and would help give listeners the impression that improvisation was taking place. Haydn's Capriccio in C major--both an exemplar of musical humor and a masterpiece of simulated improvisation--shows both techniques in action. The work alternates tightly-structured motivic elaborations with passages of blatantly un-melodic writing, haphazard modulations, and parenthetical digressions that are neither properly begun nor properly ended. The performer, as a result, seems to be following whims of the moment rather than a prefabricated compositional plan:

In both Haydn's Capriccio and Norm's Moth joke, the formal plan of A=A is still present, hidden within the protracted structure. Both narratives elevate bloatedness to the level of a humorous trope. In the moth joke, Norm constructs an extended parenthetical about the moth's personal difficulties; and in Haydn's Capriccio, the harmonic plan follows a number of extended digressions into distant tonal areas. Both end with an obvious, even banal, return to the subject-matter with which they began--and it is in this sense that their overarching paths still follow the plan of A=A: or, more correctly, of A.......=A. As soon as Norm says "a moth goes into X", a punchline involving "attracted to the light" seems inevitable. And likewise, built into Haydn's musical vernacular is the requirement that a composition end with an emphatic assertion of the tonic. (In the case of this particular work, the melody begins G-C and thus prefigures the V-I motion of the last two chords.) Both works complicate the path from the first phrase to the last; yet both nonetheless derive a large part of their humor from the way they flirt with incoherence while nonetheless eventually regaining their narrative thread.

Of course, A=A is not the entire story, in either Haydn's or Macdonald's comedy. Even if utterances that take this form are possibly funny, it seems clear that they are not necessarily funny. One question, for instance, is how much the delivery contributes to all of this. Is it in the text, or in the performance, that the essence of the joke is to be found?

In both cases, delivery is certainly part of the package. It seems obvious in Norm's case that we're laughing not only at what is said, but at the way it is said, including his body-language and stage deportment both before and after the joke. (Before: setting it up as being someone else's idea [playing dumb, again]; and after: glaring at Conan, trying to gauge the reaction--or, in other cases, quietly chuckling at the jokes himself.)

The importance of performance is perhaps less self-evident in the case of Haydn, but it is nonetheless equally relevant. In both the Quartet and the Capriccio, Haydn weaves performative concerns into the structure of the composition. In the quartet, the extended pauses are integral parts of the joke--both because they help to "disentangle" the modules, allowing us to hear more clearly the I-V-I motion of A when it is used as the final utterance, and because they help to build uncertainty as to when the movement actually ends. In the Capriccio, Haydn instructs the pianist to sustain some notes (particularly those immediately preceding audacious modulations) until the sound has naturally faded away before moving on to the next segment. Both of these techniques--the grand pauses and the protracted sustains--are meant to slow down the performer: perhaps to simulate the act of thinking, of stalling for time, of searching for the next word. Both Haydn and Norm prefer to stumble, fail, and splutter as they seem to lose themselves in a narrative loop. Yet both ultimately arrive back again at A.

Mozart's humor, in the 39th Symphony and the various other works in which he mixes up opening and closing modules or plays dumb, is geared less towards performance and more towards abstract elements of the compositional language. Why? We can't be certain, but one possibility is that he is more interested in subversion than straightforward joke-telling. Haydn's jokes make uproarious humor a feature: they want to be noticed by listeners. Although Mozart tells his share of blatant jokes (most famously in A Musical Joke, K.522), his humor is generally far subtler, often leaving us wondering whether what we just heard was a joke at all. When he wants to maximize this ambiguity, the best way to do it is not to implicate performative decisions, but allow the players to keep a straight face while the musical grammar seems to unravel.

Tuesday, August 17, 2021

New article: 17th-century German violin technique

My new article has just been published in Early Music, and is available on the OUP website!

Link: "Violin technique and the contrapuntal imagination in 17th-century German lands"

Because the pandemic introduced so many difficulties for reviewers and publishers, this article was very slow to move through the pipeline. In fact, although it was only published a few weeks ago, it was the first thing I wrote after I finished my PhD in 2019. I wanted a wee break from writing about Mozart, so I turned to one of the other topics I've spent a lot of time thinking about. (And indeed, some of the observations that wound up in the article originated in posts here, way back in the first iteration of my attempts as a blogger, c.2014-2015!) I also wanted to explore a few ideas about evidence, explanation, and epistemology, so I tried to angle the topic in such a way that it wouldn't just be about violin technique, but something far more elusive and speculative: the imaginative structures that arise in the player's mind when an instrument is used in a particular way.

Wednesday, June 3, 2020

Some Thoughts on History, Ontology, and the Work Concept

I read a version of this text at the conference "Making Musical Works in Early Modern Europe" (London, June 2019). My goal was to provoke rather than simply present an argument: I wanted to express frustration both at the general urge among philosophers to define things (the impulse on which ontological enquiries are based), and at the urge among some historical musicologists to take those definitions too seriously. The opening of the essay is focused on some other presentations at the conference; in the second half I move on to discuss broader problems.

***

According to the official programme for this conference, the title of my offering is 'The Challenges of Ontology'. It’s a nice title, and in some ways it encapsulates what I’m going to say; however, it isn’t the title I submitted to the organizers of the conference. My actual title is: 'Who needs an ontology of the musical work?'

It isn’t difficult to see why this title was deemed unacceptable. As a rhetorical question, it is provocative — perhaps even glib in its implied dismissal of an entire musical-philosophical enterprise. However, my intention in posing such a question was not to undercut the value of that enterprise. Today’s discussions have been very stimulating, and many of the topics addressed, both here and in the wider literature, carry important ramifications: musical, historical, and, more broadly, cultural.

Yet, at the same time, my enthusiasm is tempered by a concern that we may be thinking about ‘musical works’ in the wrong way. For one thing, we seem to be conceiving of work-hood as something that a piece of music either does or does not have. For instance, earlier today [a participant] took issue with Lydia Goehr’s apparent contention that Bach ‘did not’ create musical works. (As it happens, Goehr's argument is more nuanced than the version presented to us.) The participant claimed that Bach and his contemporaries (not to mention forebears!) did, in fact, create musical works. There’s a kind of binary absolutism behind his thinking: Bach either ‘did’ or ‘did not’ create musical works. [Another participant], too — though she offered a wonderfully nuanced acknowledgment that musical ontology arises at different points in different repertoires — seemed to imply that for each of those repertoires there is a point at which ontology comes into play.
So, again, there’s this notion that musical works either ‘do’ exist in the cultural currency of a given time and place, or ‘do not’. More crucially, there is a notion that musical works either ‘are’ the kinds of entities for which an ontological account can be given, or ‘are not’.

The problem with speaking about musical works in absolute terms is that the difference between a ‘piece of music’ and ‘a musical work’ is not, in fact, absolute. In other words, it is not intrinsic to the music: not to be found in the notes themselves, but rather in the way those notes are understood within broader cultural contexts. (If you doubt this, imagine being given a page of a score, with no information about the composer, the date, or anything else, and having to say whether the page came from a piece of music or a musical work. You would be at a loss. And indeed your first recourse would probably be to try to figure out when and where it was written, so as to pin down its extramusical dimensions.) What this means is that when we ask ‘whether Bach composed musical works’, we’re actually asking a complex bundle of questions that have no clear answer, and which rely on our answers to a vast array of sub-questions.

Now, already, the implications of the rhetorical question in my title should be coming into focus. By asking who needs an ontology of the musical work, I am pointing towards precisely the fact that who is asking the question will, to a large extent, determine the criteria they adopt, and thus the definition that ultimately emerges. In the case of Bach, it’s worth noting that the three questions we might pose — did Bach think he was writing musical works, did his contemporaries think so, and do we think so — will all receive different answers. It may well be that Bach’s contemporaries did not think he was composing musical works, yet it seems evident that the musical culture we currently inhabit, in the 21st century, does think so. (Bach, of all composers, has over the past few centuries become such a talisman of musical genius and monumentality! For many of us, the notion that Bach didn’t compose works would seem a provocation — far more so than the rhetorical question in my title!)

One of the disagreements at play here is related to history: over time, changing notions of creativity, genius, craftsmanship — as well as repetition, aesthetic individuation, and so forth — have contributed to changing notions of the musical work. Fine. In some sense, that’s why we’re talking about the issue in the first place: because some influential thinkers have argued that things were sufficiently different ‘back then’ so as to render problematic our modern viewpoints.

I’ll return to this question of historicity in a few minutes. But first, I want to point out that the chronological divide is by no means the only one that bears upon ontological disputes. The 'who' in ‘who needs an ontology of the musical work’ might refer to any number of different people. To name a handful of examples who have a direct interest in questions surrounding the work-concept: composers, listeners (concertgoers and Spotify-streamers alike), concert organizers, philosophers of music, critics, and record company executives. All of these people bring widely divergent preoccupations and problems to their thinking about musical works, and will therefore understand the nature of those works very differently. And indeed, I didn’t even mention ‘performers’ in that list — but here, too, within a single group, coexist students, amateurs, professionals of various kinds, and teachers. We should expect to find striking variation between the way professional performers think about musical works and the way (say) Suzuki violin students do. Indeed, we might go even further, and speculate that professional performers conceive differently of musical works when they are in the practice room vs. when they are on stage.

The reason for this is simple: each of these people will be asking ontological questions for different reasons, and thus addressing different problems. (The problems of the record company are not those of the Suzuki student, just as the problems of an orchestral violinist are not those of a concerto soloist improvising a cadenza.) And it is those problems, in all their messy practicality, that impose constraints upon the kinds of answers each person can and will accept. The boundary between a mere ‘piece of music’ and a ‘musical work’ is, as I’ve already said, not to be found in a given collection of notes, but in the mind and mindset of the person encountering those notes.

Now, from the perspective of some analytic philosophers, this may sound dangerously relativistic — yet in practice, we’ve treated musical ontology this way for decades. To take just one example (outdated enough to be fair game, crazy enough for the error to be obvious), Nelson Goodman’s criterion of ‘complete compliance’ (in which a performance of Beethoven’s 5th with a single wrong note is actually a performance of a different work) is often ridiculed precisely because, despite its analytic rigour, it so spectacularly fails to account for either the interests of listeners or the realities of performance. It is dismissed on the grounds that nobody with actual stakes in the matter could possibly take it seriously. So, the notion that an account of the musical work should be beholden to practicalities is not new. Yet it is both striking and puzzling to me that so many discussions proceed by first attempting to pin down the necessary and sufficient conditions for workhood, and then attempting to show that they did or did not obtain at various arbitrary historical moments.

The problem with this approach, as should now be clear, is not only that it starts by chasing a chimera. Even the second question — whether musical works existed in a given time and place — is based on a misconception. This is because everything I’ve just said about the divergent interests of modern musickers today also applies to musickers of the past. Claims that Bach ‘did’ write musical works, or that musical works ‘came into existence in 1800’ are simply not meaningful, since musical works both did and did not exist before 1800.

It is certain that, to many of Bach’s contemporaries, it made no sense to think about music as anything beyond ‘mere pieces’ — but, then again, we could say the same for some of Liszt’s contemporaries, and indeed for some people today.

Conversely, it is plausible that, in the minds of other historical figures, musical works did indeed exist long before 1800. I find it difficult to imagine that music of substantial quality can ever be composed without a robust (even if tacit) notion of the musical work. My reason for thinking this is that the best composers often depart radically from mainstream contemporary standards, and it is hard to imagine that they could do so without having a sense that they were doing something fundamentally different — dare I say ‘better’, and perhaps even ‘more important’, more ‘monumental’ — than their colleagues. This is as true of Monteverdi and Buxtehude as it is of Bach and Mozart, and any other pre-1800 figure who wrote great music. These composers were not doing the bare minimum, generating workaday minuets out of simple 8-bar phrases, or crafting cantatas from prefabricated schemata: although they did use schemata and 8-bar phrases, they also pursued more idiosyncratic aesthetic goals — and they often did so with an explicit sense that the fruits of their labours would hold lasting aesthetic value. Indeed, something along these lines must have occurred to them: otherwise, there would be no reason to eschew contemporary stylistic clichés, and every reason to rely on them. (This is because, as Leonard Meyer has shown at length, each compositional decision that departs from mainstream norms slows productivity considerably.) Yet even without dwelling upon such matters, I think it’s fair to assume that all people in art-making cultures have a work-concept of some sort: a notion that some objects are uniquely worthy of attention in a way that other, even superficially similar, objects are not.

What does all of this add up to? In some sense, I have returned to the point at which I began. I stated at the outset that the question of ‘whether Bach composed musical works’ is actually a proxy for a number of other questions, foremost among them: whether Bach thought he composed musical works. (A question which, as is now clear, I would answer in the affirmative.) However — and this is where I’ll live up to the name of the panel and become properly metahistorical — there is still an unsolved problem lurking in the background: namely, should we care about ‘what Bach thought he was doing’? Actually, let me put that more generously: I find it likely that Bach thought he was composing musical works; others find it likely that he did not. The question is: do his thoughts have any bearing on the reality of whether he did, in fact, compose musical works?

I suspect that the prevailing bias in this room is that yes, Bach’s thoughts matter. Yet there are reasons for skepticism. Like us, Bach was human — and, like us, Bach could be wrong about the nature of his own actions and motivations. His interests in alchemy, his religious convictions, his knowledge of planetary orbits (all six of them!), his apparent political sentiments — these seem quaintly outmoded by modern standards. When we investigate his beliefs on these matters, we do so for historical interest rather than to settle active debates about the nature of scientific investigation or political morality. His thoughts are, in other words, ‘mere’ (and I use the term advisedly) — ‘mere’ historical curiosities. Why should his thoughts — whatever they were — about the nature of musical works, be any different?

In one sense, his intuitions on the matter could not possibly be sound, because his understanding of the import of his style was necessarily incomplete. He could not have grasped the extent of his inheritance from previous composers, nor the place of each individual work within his oeuvre, nor the nature of his stylistic overlap with contemporaries, much less the cultural-evolutionary trends that his works would inspire in the centuries that followed. It is only with considerable temporal remove that these questions can be answered in any detail — and if for that reason alone, we should be reluctant to accept an account of musical ontology that rests on historical perspectives.

Thus, to return to that glib, rhetorical question of my original title — who needs an ontology of the musical work — perhaps the only reasonable answer is that, if anyone does, we do. Whether composers of the past thought of their compositions as ‘works’, or as pieces of music, or as formulas for successful one-off performances (or indeed in other terms entirely!) doesn’t change the fact that we, asking these questions in the 21st century, need not mechanically privilege historical perspectives when formulating answers.