Monday Math: A Rant About Jargon

Jerry Coyne calls our attention to this abstract, from a recent issue of Proceedings of the National Academy of Sciences:

We show how to measure the failure of the Whitney move in dimension 4 by constructing higher-order intersection invariants of Whitney towers built from iterated Whitney disks on immersed surfaces in 4-manifolds. For Whitney towers on immersed disks in the 4-ball, we identify some of these new invariants with previously known link invariants such as Milnor, Sato-Levine, and Arf invariants. We also define higher-order Sato-Levine and Arf invariants and show that these invariants detect the obstructions to framing a twisted Whitney tower. Together with Milnor invariants, these higher-order invariants are shown to classify the existence of (twisted) Whitney towers of increasing order in the 4-ball. A conjecture regarding the nontriviality of the higher-order Arf invariants is formulated, and related implications for filtrations of string links and 3-dimensional homology cylinders are described.

He then remarks, “This shows how far removed mathematics is from even other scientists. Or are our own biology abstracts just as opaque to mathematicians?”

Now, the first thing I would point out is that this abstract would be opaque to most mathematicians as well. For myself, I can recognize it as having something to do with differential topology, and there are a few phrases in there with which I am familiar, but I'd be hard-pressed to tell you what the paper is actually about.

Of course, jargon is an affliction common to just about every academic discipline, and not just in the sciences. I would say, though, that math is probably among the worst offenders. The abstract of a typical research paper in mathematics is opaque not just to non-mathematicians, but to all mathematicians who are not specialists in the particular research area being addressed. And when I say opaque, I mean opaque. As in, you won't make it past the first sentence.

Biology certainly is not as bad. In evolutionary biology I am definitely an amateur, but I find that I can often understand the introduction and discussion sections of a typical paper well enough to explain the gist to someone else. In math, it is usually impossible even to explain the problem to a non-mathematician.

Making matters worse is a mathematical culture that favors brevity and concision far, far more than it does clarity. Submit a paper with two consecutive sentences of exposition and watch how quickly the referee gets on you for it. A typical research journal in mathematics is just a stack of papers between covers. Compare this to a journal like Science or Nature. In addition to the formal research papers, they also have research summaries. These are typically written at a high level, but are also accessible enough so that a typical reader can understand what has been accomplished. And this is in addition to other non-research features, like editorials, book reviews and perspective articles. In mathematics there is none of that. Occasionally you get a survey article to keep you abreast of recent developments in some field or other, but these are just as opaque as the work they are describing.

Simply put, it is an awful, almost physically unpleasant experience to read a research paper in mathematics, at least if you want anything more than a superficial understanding of what was done. That is why it takes so damn long to get a paper through peer review. It's because every time the referee glances over at the paper sitting on top of the filing cabinet, he thinks of something else he'd rather be doing. If you learn the fate of your paper within six months you've beaten the odds, but it's even money that your paper will just disappear into the ether.

The fun part of doing research is when you go to a conference, meet the person who wrote the paper, and ask him to explain what he actually did. For one thing, usually the author is so delighted that anyone gives a crap about his paper that he will patiently spend hours, if necessary, explaining it to the thickest graduate student. For another, about ninety-five percent of the time the paper is ultimately pretty simple.

Sadly, the rot extends to math textbooks as well, which, with very few exceptions, are simply horrible. I mean really, really bad. It is commonly considered a great faux pas to actually explain what you're doing. You will be accused of being overly wordy if you do anything other than produce an endless sequence of definition-theorem-proof. Mathematicians too often seem to take absolute delight in being as opaque as possible. I can't tell you how many times I have heard friends and colleagues praise for their concision textbooks which, to my mind, are better described as harbingers of the apocalypse. If, as a textbook author, you place yourself in the student's shoes and try to anticipate the sorts of questions he is likely to have approaching the material for the first time, a great many of your colleagues will say that you have done it wrong.

Let me turn the floor over to Morris Kline, who, in his 1978 book Why the Professor Can't Teach, nailed this issue perfectly:

But the decision is readily made. It is easier to say less. This decision is reinforced by the mathematician's preference for sparse writing. If challenged, he replies, “Are the facts there?” This is all one should ask. Correctness is the only criterion and any request for more explanation is met by a supercilious stare. Surely one must be stupid to require more explanation. Though brevity proves to be the soul of obscurity, it seems that the one precept about writing that mathematicians take seriously is that brevity is preferable above everything, even comprehensibility. The professor may understand what he writes but to the student he seems to be saying, “I have learned this material and now I defy you to learn it.” ...

A glaring deficiency of mathematics texts is the absence of motivation. The authors plunge into their subjects as though pursued by hungry lions. A typical introduction to a book or a chapter might read, “We shall now study linear vector spaces. A linear vector space is one which satisfies the following conditions...” The conditions are then stated and are followed almost immediately by theorems. Why anyone should study linear vector spaces and where the conditions come from are not discussed. The student, hurled into this strange space, is lost and cannot find his way.

Some introductions are not quite so abrupt. One finds the enlightening statement, “It might be well at this point to discuss...” Perhaps it is well enough for the author, but the student doesn't usually feel well about the ensuing discussion. A common variation of this opening states, “It is natural to ask...,” and this is followed by a question that even the most curious person would not think to ask.

Exactly right. What's tragic about this is that math, far more than most other subjects, really does make sense. You really can “figure it out” in a way that you often can't in other branches of inquiry.

The inability of so many mathematicians to place themselves in the shoes of their students was brought home to me as an undergraduate. I was a sophomore, and was just starting to get serious about mathematics. Browsing through the course catalog I noticed an entry for Differential Geometry. I had no idea what that was. I had never even heard that phrase before. So I went to the open house the math department held for people considering a math major, and I asked one of the professors the following question: “What is differential geometry?” He answered, appearing to believe sincerely that he was being helpful, with a jargon-rich description of some open problems in the field.

Equally scathing (and eloquent!) is Gian-Carlo Rota, in his book Indiscrete Thoughts:

By and large mathematicians write for the exclusive benefit of other mathematicians in their own field even when they lapse into “expository” work. A specialist in quantum groups will write only for the benefit and approval of other specialists in quantum groups. A leader in the theory of pseudo-parabolic partial differential equations in quasi-convex domains will not stoop to being understood by specialists in quasi-parabolic partial differential equations in pseudo-convex domains....

The bane of expository work is Professor Neanderthal of Redwood Poly. In his time, Professor Neanderthal studied noncommutative ring theory with the great X, and over the years, despite heavy teaching and administrative commitments (including a deanship), he has found time to publish three notes on idempotents (of which he is justly proud) in the Proceedings of the American Mathematical Society.

Professor Neanderthal has not failed to keep up with the latest developments in noncommutative ring theory. Given more time, he would surely have written the definitive treatment of the subject. After buying his copy of Y. T. Lam's long-expected treatise at his local college bookstore, Professor Neanderthal will spend a few days perusing the volume, after which he will be confirmed in his darkest suspicions: the author does not include even a mention, let alone a proof, of the Worpitzky-Yamamoto theorem! Never mind that the Worpitzky-Yamamoto theorem is an isolated result known only to a few initiates (or perverts, as graduate students whisper behind the professor's back). In Professor Neanderthal's head the omission of his favorite result is serious enough to damn the whole work. It matters little that all the main facts on noncommutative rings are given the clearest exposition ever, with definitive proofs, the right examples, and a well thought out logical sequence respecting the history of the subject.

I recall reading, the last time the Fields Medals were awarded, descriptions of the work that earned the recipients their awards. They were written in the usual format, with thick, dense jargon starting right in the opening sentence. It is as though it never even occurred to the writer to make his description accessible to mathematicians outside his own narrow research specialty. Now, I grant you that I am not in the forefront of research mathematics. Though I try to keep one foot in the research world, I do not see myself primarily as a researcher. But come on! I do have a PhD in the subject, and I have been a professional mathematician for fifteen years (starting the count from when I entered graduate school). And yet, I am unable to explain the accomplishments of the modern giants in my discipline. Very frustrating.

That, at any rate, is the bad news. That attitude is still very prevalent among the top research schools, and is even more oppressive in second-rate departments pretending to be first-tier. But my impression is that it is far less popular than it used to be. I think there has been a resurgence of interest in good expository writing, and in not being quite so arrogant and dogmatic about what's important in mathematics. The mathematical world is vast, and there's plenty of room for everyone. Research is important, of course, but so are teaching and pedagogy and outreach and attempts to let the rest of the world know why they should care about what we are up to.


I am not certain that mathematics could really be any other way. It is unique in that it is not hampered by having to be about anything real. There are some other fields of study that are likewise unencumbered, but they are reined in by pretending to be about reality. But mathematics disdains reality. The more abstract the better.

To use a mathematical metaphor, the whole field is like the Mandelbrot set, each section a unique entity unto itself, but wholly connected, often by threads so thin as to be invisible. Whole careers are spent on the smallest lobe of the set. But no progress would be made at all if those mathematicians were limited to standard English to describe their work. Any attempt to standardize the terminology would simply hide what makes their particular area unique and make further progress impossible.

Of course, this also guarantees that no one else is likely to use the terms they coin. And once they commit those words to their thinking, no one else will even understand what they are talking about.

Terence Tao and Timothy Gowers are great examples of world-renowned mathematicians (both have won the Fields Medal) who are also great writers. Their blogs can sometimes be very technical, but they can also write very good survey and general-interest articles. Gowers had a recent piece explaining the AV+ voting process, for example.

By Donal Henry (not verified) on 23 May 2011 #permalink

A glaring deficiency of mathematics texts is the absence of motivation.

That's where the teacher is supposed to come in. Unfortunately, many faculty pride themselves on their rigor, which usually means that they skip the motivation bit.

What always bothered me was that textbooks are chosen to impress other faculty rather than for readability by the students. I think I once suggested "Calculus Made Easy" (Silvanus P. Thompson), but the others on the committee thought that it lacked rigor.

And then people wonder why so many students are turned off by mathematics.

It's certainly not true of your Monty Hall book, though - I found that to be very accessible, though perhaps your peers would have found it to be insufficiently terse.

I have often accused the current "postmodern" and "deconstructionist" literary criticism of being opaque, and I have claimed that this results from the authors writing only for the other initiates of the field, ones who have a thorough indoctrination in the culture and conventions of lit-crit, and sometimes (often?) writing more to display the authors' cleverness than to convey the results or insights they purport to have. This, it would seem, leads to a downward spiral; having only a tangle of their own culture for reference, they diverge ever farther from the general audience.

I had not really thought about it, but I totally can see how the identical phenomenon can take place in mathematics. Initiates writing for initiates --> closed-circle esotericism.

To some degree advanced textbooks, research papers, and the like are always going to have, as their audience, fellow "initiates" as Pirvonen refers to them. Whether or not that is a failing is indeed an interesting question, but it is probably unavoidable to some extent. An advanced textbook on Java, for example, isn't going to explain the basics of object oriented programming to a complete novice.

However, in that case, the argument would be that there are other, novice level textbooks that do explain the basics of OO, good ones aimed at the complete novice that do explain the concepts in terms of "why" as well as "how". I think Jason's point is that there are comparatively few mathematics textbooks, even for relative novices (say undergraduates) that are accessible to their target audience (which in the case of undergraduates includes engineering and science students other than those intending to pursue mathematics majors).

Donal - I'm working my way through Gowers' "Princeton Companion to Mathematics" right now. I've noticed that every time I come to one of Tao's contributions it lightens my day a little. So 100% agreement here. (I certainly understand complex analysis better now after reading the introductory chapter to the Companion than I did after taking a "sink or swim" course taught by _physicists_ at University.)

I don't think that abstract is that bad. The concept of a topological invariant is easily understood even by laypeople, and that's enough to understand the general gist of the abstract.

Maybe this is part of the trend of expository writing increasing in popularity, but in my time in grad school (just about to finish) I've found the books that do a good job of expository writing (Spivak and Eisenbud jump to mind) are widely praised. Of course I still agree that in general mathematics is very frustrating to read (and write!).

As several people have observed, there's nothing wrong with jargon in its place. Between people in the field, it makes life a lot easier, and communication one hell of a lot quicker. However, you need to be aware that you're using jargon, and be willing to switch registers depending on who you're talking/writing to. This is something that most intelligent people have no trouble with in other contexts (most of us don't swear in front of our Granny, or use "whom" in casual speech with our friends); it's odd that so many don't seem to realise the opacity of jargon to outsiders.

Case in point: William @#8. I understood "iterated", "non-triviality", and possibly "higher-order" (depending on context). "Topological invariants" are something I've heard of, but have no idea what they are. Admittedly my last maths was A-level (with only a little reading beyond, and a brief foray into formal logic at Uni), but the jargon is pretty opaque! If you define layperson as someone from a different maths field, you may be right that that's broadly comprehensible; but I doubt a typical undergrad even in the physical sciences would "get the general gist".

By stripey_cat (not verified) on 24 May 2011 #permalink

Is part of the problem with this specific paper that it's in PNAS? My advisor refers to that as a "political journal," refuses to submit papers to it, and claims that you can get stuff in pretty much peer-review-free if you know someone. And, indeed, I know of some *crap* studies in there that nevertheless get cited like crazy.

Anyone have firsthand details of anything like this regarding PNAS?

Rosie, that claim was made in the Golden Ratio book, but the author (Mario Livio) argued that it was silly and petty. Then again, I don't think he's a straight-up mathematician.

By cheglabratjoe (not verified) on 24 May 2011 #permalink

It is good to hear this being acknowledged even by a mathematician. As a physicist who has served on a few PhD thesis committees for mathematics students, it is like pulling teeth trying to get them to put in a few sentences about motivation and significance. I usually receive puzzled looks, as if I had asked the student to present his thesis while juggling chainsaws on the back of an elephant.

By Hamilton Jacobi (not verified) on 24 May 2011 #permalink

Case in point: William @#8. I understood "iterated", "non-triviality", and possibly "higher-order" (depending on context). "Topological invariants" are something I've heard of, but have no idea what they are.

I'm a physicist by training, and I didn't get much further than this. I think I know what a manifold is, and I see that there are different categories of invariants which are named after people (presumably the mathematicians who wrote papers about those categories of invariants). I also recognize "4-ball" as a higher dimensional analog of the sphere, which would be a "2-ball" in the jargon (the number refers to the dimensionality of the surface). That's about all I get that stripey_cat didn't.

I suspect this has been the situation for a long time. In Tom Lehrer's song "Lobachevsky" (written in the early 1950s), the narrator describes being assigned the task of writing a paper "on analytic and algebraic topology of local Euclidean metrization of infinitely differentiable Riemannian manifold". I know enough to suspect that description is word salad, but not enough to be sure. The narrator's entirely proper reaction: "Bozhe moi!", which I think is Russian for "Oh my God!"

If you're writing a paper for a specialty journal where you can assume that your audience also knows the jargon, that's fine. It is not appropriate for an undergraduate textbook.

By Eric Lund (not verified) on 24 May 2011 #permalink

What immediately came to mind when I read "Are the facts there? This is all one should ask. Correctness is the only criterion and any request for more explanation is met by a supercilious stare" is what often happens in software development: many developers feel that because the code (the facts) is there, it should be obvious. Yet quite often, maintaining it down the road becomes a pain, because one has no context as to why decisions were made.
Perhaps this is why computer science and math people so often overlap.
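
To put it concretely (a made-up fragment; the service, the incident, and the constant are all invented purely for illustration), the gap between "the facts are there" and "the reasoning is there" can be a single comment:

    /* Cap retries at 3: the upstream billing service rate-limits us after a
     * fourth attempt (see the 2010 outage writeup), so retrying harder makes
     * the outage worse. The code records the "what"; this comment records the
     * "why" that the next maintainer cannot otherwise recover. */
    #define MAX_RETRIES 3

Without the comment, the 3 is just a fact on the page; with it, the decision survives the original author.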

The problem with college math textbooks is that they are written to be used in a traditional college course where the instructor supplies the motivation and the real-time editing of the material. So they make lousy self-study books (and worse recreational reading), since they are overstuffed with examples and the topics are written so as to (1) impress the faculty who choose among the competing texts, which means "include every possible topic", and (2) ensure that no reviewer says anything like "the level of this book is too low ....", which would ensure that no one will use it AND that the author won't get full due for it as a publication. Readability by students is a low priority, and some students would even prefer to use a prestigious, difficult book over, say, a readable one which might suggest to an outsider that the course is not rigorous.

By Ned Rosen (not verified) on 24 May 2011 #permalink

I recall encountering Green's Theorem when I was working on my engineering degree. One of my buddies was also taking the course, and asked me about it. So we worked on it for a couple of days and more or less figured it out. In class, someone asked the prof about it, and the prof patiently explained how it worked. By then, it seemed "obvious" to me. And it was great teaching, or so I thought. However, all engineering math has some practical use, and I had no idea of any applications for Green's Theorem. I've since heard it has some electrical engineering use. Well, that's something.
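
For anyone else who only half figured it out the way we did: as I understand it now, the statement is roughly that for a plane region D bounded by a nicely behaved, positively oriented closed curve C,

    \oint_C \left( P\,dx + Q\,dy \right) = \iint_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dA,

that is, a line integral around the boundary equals a double integral over the interior. I'd guess the electrical engineering use I heard about comes through exactly this circulation-versus-flux bookkeeping (it's the planar case of Stokes' theorem, which shows up throughout electromagnetics), though I can't vouch for the details.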

Some of the engineering courses, like Controls Engineering (which has nothing to do with electronics, per se) or thermodynamics, went the other way. We've got a problem, and you have to use this math to solve it. But these courses expected you to have already learned the math. My thermo course did not have the appropriate math as a prereq, however. The claim was that you could get through the course without it. Maybe. But could you become competent in thermo without differential equations? No. You have to know the application for the math to make any sense. So, you really have to teach both at once.

Part of the problem is that you have to get good at abstraction to do math. To do that, you have to get good at inventing symbols, and sometimes grammars, and if the problem is big enough, whole languages. So it's not a surprise that mathematicians speak their own language.

Old, old math joke: Professor spends first 15 minutes of period filling two blackboards with equations, then steps back and announces: "From there it is obvious." Next 15 minutes, he just stands there, pondering the boards, immersed in thought. Then he steps out into hall, lights up cigarette and paces furiously back and forth, thinking, thinking. Just before period ends, rushes back into classroom and announces "Yes! It IS obvious!"

The fundamental problem is that math papers rarely answer the simplest question: Why the hell should the reader care? It's even more irritating when they start leveraging theorems you've never heard of to prove results which make no sense. And then ... eventually ... it dawns on you that they're just doing something trivial while you thought they were doing something useful. And then you wonder why you spent 4 hours reading something you could have guessed. It would save a lot of time if they explained what they did in English. (And then proved it in math for those who want to double check the result.)

I am a retired mathematics professor. I think that Jason is spot on about textbooks: there should be more explanation. But that bumps into another problem: math textbooks are already too long because there is too much material. The solution is to have fewer topics, instead of trying to be all things to all professors.

A big difference between math and biology (and chemistry and physics and...) is in the required background - not the quantity of it, but its nature.

Let's look at another article from the same issue (May 17, 2011) of PNAS. The article is advertised as: "Ebola virus pathogenesis". Sounds interesting. Here's the title of the article: "T-cell immunoglobulin and mucin domain 1 (TIM-1) is a receptor for Zaire Ebolavirus and Lake Victoria Marburgvirus". Here's part of the abstract: "Here, we show that T-cell Ig and mucin domain 1 (TIM-1) binds to the receptor binding domain of the Zaire Ebola virus (EBOV) glycoprotein, and ectopic TIM-1 expression in poorly permissive cells enhances EBOV infection by 10- to 30-fold. Conversely, reduction of cell-surface expression of TIM-1 by RNAi decreased infection of highly permissive Vero cells."

I have a rough idea what some of those words mean, and with a few hours work I could probably get a better idea, but it would take weeks or months to truly understand what "T-cell Ig" means. By "understand" I don't mean "oh, it's an immune cell blah blah". I mean, what is its significance in the context of, and how does it interact with, the rest of the immune system; what are all the biomolecular mechanisms by which it acts; etc. For the people who actually study this stuff, the simple phrase "T-cell Ig" immediately brings up this vast body of knowledge, none of which would be appropriate for inclusion in the abstract.

The phrase "Whitney move" similarly encapsulates a vast body of knowledge, none of which is appropriate for the abstract, but if you know it (and a few other things) you understand the abstract. The body of knowledge behind the Whitney move is a somewhat larger body, though, which brings me to my next point.

One of the defining characteristics of science is that it be reproducible. There was at least one experiment performed for the "Ebola virus pathogenesis" article. Suppose I wanted to replicate that experiment. I have absolutely no idea where to start. If I read the article, I might get a clue, and they might even mention some of the equipment they used. I could (assuming I had enough money) go buy the equipment. Then I'd have to set up my own lab; the article certainly doesn't explain how I'd go about doing that. But let's suppose I've set up my lab. Then I have to learn how to use the equipment; I'm guessing it doesn't have a quick-start guide like the one that came with my television.

Similarly, if I wanted to replicate the failure of the Whitney move construction, I'd have to spend quite some time learning the necessary background. I wouldn't have to set up a lab, but it would still take a lot of time. I already know what "invariant", "immersed surfaces", "obstruction", and a few other things mean, which are perhaps analogous to knowing how to operate the specialized lab equipment (or at least knowing what it does); they are pretty basic things that "everybody knows", yet they are absolutely required to do any work in the field.

The opacity you see in mathematics is analogous to the difficulty in setting up a bio lab. It is simply a reflection of the fact that there is a lot of background required for the subject. It just so happens that, in the case of mathematics, all of that background is in the form of definitions, theorems, and proofs, and not very many people are deluded into believing they understand the article (or even the abstract). In the case of biology, though, a casual reader can look at the pretty pictures of the cells, and then read some things on wikipedia (or even a real book) and see more pretty pictures and maybe some cartoons showing a protein binding to a receptor, become deluded into believing they understand what the abstract is saying, and then interpret the "Conversely..." part as "Cool, they cured Ebola!" (I am exaggerating for effect, slightly.)

Which brings me to my main point: doing mathematics and doing biology are both difficult, but due to the very nature of the subjects, comprehending the results of someone else's work is more difficult in mathematics than in biology. (There are exceptions, of course. Fermat's last theorem is probably the best exception; high school students can understand the statement, but the proof is, if I recall correctly, hundreds of pages of very dense abstract work. I don't think anybody criticized Wiles for not making it more expository, but there was some criticism that his proof wasn't actually complete, and they had to do a bit more work to finish it.)

I agree that mathematics articles are sometimes a bit lacking in terms of "expository" stuff; often I'll see terms used in the introduction which are not defined in the article, nor is there a citation specifically for that term. I also agree that some mathematics articles are written as though the authors are not using their native language, even when they are. This is, however, nothing more than anecdata; I don't know what the situation is in other subjects.

Having said all that, though... You said "it is an awful, almost physically unpleasant experience to read a research paper in mathematics, at least if you want anything more than a superficial understanding of what was done." I have found several of the "lay-person" books which I have read to be somewhat painful, because they go so far out of their way drawing analogies, making things understandable, and so on, that it is difficult to see any actual technical content; worse, what I do see is sometimes rendered nearly incorrect through the use of analogy. (No, I'm not going to name names.)

@DC Dan: agree re. the impact of software style on maintainability (having both authored my own code and maintained others'), but I would argue that "the code" is analogous to the statement of a theorem; the documentation should include what are analogous to the definitions and proof, but (in both math and software) a reasonable level of background should be assumed (a mathematician doesn't need to define "continuous" any more than a software developer needs to define "sort"). Math and comp. sci. overlap because they are both very logic-oriented, not because of writing style.

I actually had the same reaction to that abstract as Jerry Coyne did, but after about 10 minutes, reading the beginning of the paper with some Wikipedia referencing, I realized that the main problem was a lack of familiarity with the "Whitney" terms. Not that I could easily become an expert on them, but I now realize that the "Whitney move" refers to a generalization of a trick that I've used in many different contexts and had no name for.

In fact, given my personal background, I could probably understand most of this paper more easily than most biology papers. And yet that abstract was still very much intimidating to me. So, with this single anecdote as non-data, I boldly, timidly wonder if the jargon problem is more about getting psyched out than about the difficulty of the actual subject matter.

I mean, if you have to learn about a particular kind of cell, well, at least you probably know what a cell is, and some basic things about them, and so you are refining something you already have in your head. And it's an actual thing out there in the world somewhere, so at least you know that statements made about it refer to actual physical objects.

But if you encounter a phrase like "Whitney tower", not only do you not even really know what kind of thing is being talked about, but you might not yet have any concepts in your head that provide a useful raw material for refining your picture of what's happening. For all you know, this is something that might not be physically instantiated in anything you have encountered (or even possibly could encounter). And that's much more intimidating. Even if it is a kind of thing that you already know about, the fact that the term is unfamiliar implies that it will take some work to connect "Whitney tower" to the (probably less rigorous or less general) concepts in your head that it should go with, to actually search through your head to find something to suffice as a good first rough idea of what this thing is.

By quantheory (not verified) on 24 May 2011 #permalink

I think part of the problem for undergraduates is a lack of meta-knowledge. I don't think most of 'em ever get what the point is of proving a theorem or what it really means to prove a result. They simply memorize a series of steps, rather as kids in Pakistani Madrassas memorize the Koran in Arabic without ever knowing what it means. I recall one young gal I tutored who had taken a prealgebra course the semester before but was surprised to discover that the rules for multiplying negative numbers still applied when she switched to algebra. "Is that still supposed to be true?" I gather that research has been done on this problem, but it turns out to be fiendishly difficult to explain what you're doing when you do mathematics. Whitney towers are the least of it.

IMO, the biggest problem with jargon in biology (or at least, those fields that are molecular oriented) is the complete and total clusterfuck that is gene and protein nomenclature. Things get called by whatever they were first called, which usually has nothing to do with their structure or function, and in many cases is actively misleading. Authors identify a protein that interacts with something or is found in a particular disease state, and so it gets called, "Mouse such-and-so associated small protein 3 alpha", which no one wants to repeat more than once. So it gets shortened to an acronym. Sometimes, the same acronym accidentally gets used for more than one protein. Other times, the same protein has more than one name because it was found by more than one group. These things eventually get hashed out, but cause all kinds of confusion in the short-term. None of it would be such a big deal if not for the fact that there are many thousands of genes and proteins out there, and more entering the literature all the time. And so papers have to go through a sort of Entmoot wherein the many different names and ways of referring to a protein have to be laid out before getting around to the actual science.

RE: kevin's post at 5:30 PM

My background is in behavioral neuroscience, not immunology, but here's what I'm getting out of the few sentences from the abstract you've quoted:

The ebola virus has a protein with some sugar attached to it, and that protein is important to how the virus infects cells. The infection-related virus protein sticks to another protein called TIM-1.

Some cell cultures are more vulnerable to infection than others. If you add TIM-1 protein to less-vulnerable cells, they're more easily infected with ebola. If you take already-vulnerable cells and interfere with TIM-1, the cells are less vulnerable. Therefore, TIM-1 seems to have something important to do with ebola infections.

The technical details require a lot of background to understand, but the basic idea is straightforward.

I spent a small amount of time in grad school doing high performance liquid chromatography (HPLC), but didn't pursue it. The methods section of a paper that uses HPLC doesn't mean much to me. At the same time, my advisor described HPLC in terms anybody could understand: "you put things into the machine at the same time, and they come out at different times."

There are structures in the brain with the names "substantia nigra" and "locus coeruleus." These names are technical jargon that must be memorized. They are Latin for "the black stuff" and "the blue place," which is how they looked to early anatomists.

I know that math is hard and that rigor and formalism are necessary, but I can't help but get the impression that a lot of the obscurity is unnecessary. I can't pretend to understand the mathematical basis of the theory of relativity, but even Einstein said that "You do not really understand something unless you can explain it to your grandmother."

If an idea is truly smart, it's impressive on its own and doesn't need layers and layers of obscure jargon to make it more impressive. If an idea or experiment isn't that novel or clever, it needs a lot of jargon to make it seem that way to people who don't know any better.

By inverse_agonist (not verified) on 24 May 2011 #permalink

Scientific illiteracy is a far more worrying problem.

"If an idea is truly smart, it's impressive on its own and doesn't need layers and layers of obscure jargon to make it more impressive. If an idea or experiment isn't that novel or clever, it needs a lot of jargon to make it seem that way to people who don't know any better.
"

Well, that's exactly it, isn't it? Most papers weren't worth publishing. Fluffing them up with jargon makes them publishable, sometimes.

If you think it's bad for the academics, try the practitioners. I've spent the past weeks skimming almost 100 papers about time series analysis. Almost none were well-written, and -- surprise -- the ones that were also tended to have the most implementable and advancing-the-art ideas.

This cuts both ways, I assure you. If your paper is awesome but incomprehensible, it will take a long time before anyone outside your narrow circle adopts it, and by then someone else who did a better job making the same idea comprehensible will come after and get it named after him.

Hope I'm not too late to continue the discussion...

stripey_cat @10 & Eric @ 13: You should learn some algebraic topology. It's a subject with very intuitive motivations... roughly stated, a desire to be able to solve problems like "Imagine you have a closed loop of string all tied up into a knot; can you untie it without cutting the string?" Turns out there are knots where this is impossible, but the real challenge is telling them apart from the knots where it is possible. This is where the invariants come in handy. Allen Hatcher's book on algebraic topology should probably be the model for how to make a readable graduate-level textbook, and it's available for free on his website.

Oh yes, the subject was also invented and developed entirely in the 20th century, along with what seems like most of mathematics. And that's the real problem... most people don't realize it, but a standard undergraduate degree in pure mathematics only catches you up to the beginning of the 20th century. The best you can do is catch some of the highlights of the 20th century (linear programming, graph theory, etc) in a discrete math course, computer science course, or something of that nature.

There's a lot more mathematics out there than people realize. Think about how different graph theory is from, say, calculus. All of these new 20th century subjects involve real mathematical ideas, new ideas which are interesting in their own right. Lots of people have time to learn graph theory. Very few people have time to learn algebraic topology, although it could easily be taught as an upper-division undergraduate course (and knot theory could be included in a lower-division discrete math course).

If you were to gather a list of books, if any exist(!), that are approachable to the enthusiast but non-expert in the field, I, and I am sure many others, would be very grateful.

(Oh, Hai! Female mathematician suggests that %s/he/they/g and %s/his/their/g would fix around 30 errors in your article, although it would introduce around five. My views on the article largely echo those of Eric Lund.)

By Rinka Dzidzovic (not verified) on 25 May 2011 #permalink

Eric Lund, one small correction: a 2-ball is a disc, whose boundary is the 1-sphere, also known as the circle; a 3-ball is a solid ball, whose boundary is the 2-sphere (i.e., "the" sphere); a 4-ball is a 4-dimensional thing whose boundary is the 3-sphere; etc. So you have an off-by-one error.
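
In symbols, using only the standard definitions (nothing specific to the paper): the n-ball and its boundary sphere are

    B^n = \{ x \in \mathbb{R}^n : \|x\| \le 1 \}, \qquad \partial B^n = S^{n-1} = \{ x \in \mathbb{R}^n : \|x\| = 1 \},

so the 2-ball is the disc bounded by the circle S^1, and the 4-ball in the abstract is bounded by the 3-sphere.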

I don't think the Lehrer thing is quite word salad, but a Riemannian manifold is *by definition* infinitely differentiable (in so far as you can apply that term to a manifold) and *by definition* has a locally Euclidean metrization, though I suppose you could ask what others it has, maybe getting something a bit like Milnor's "exotic spheres". Incidentally, see http://mathoverflow.net/questions/36139/analytic-and-algebraic-topology… .

I'd also be very interested in a list of math texts that people have found accessible for those of us interested in self-study. I have the better part of an engineering degree, but I've always struggled to achieve a deep (or even subtle, sometimes!) understanding of material when self-learning.

When I was a teenager, I could pick up my high school maths textbooks and there'd be perfectly accessible explanations for novel concepts. I thrived. Having entered university, I've lost this way of learning.

My favorite illustration of this, as related by Marvin Minsky to Otto Laske (http://web.media.mit.edu/~minsky/papers/Laske.Interview.Music.txt):
MM: ...And this reminds me of a different experience when I was a student; reading books on mathematics always seemed peculiarly difficult and always took a long time. But one day, I ran across a book by John von Neumann on "Mathematical Foundations of Quantum Theory" and it was the clearest, most pleasant mathematics book I'd ever read. I remember understanding his explanation of the Metric Density theorem as like a real time experience, like being inside another person's head. (This is a theorem about probability or, more precisely, about measurable sets. It says, for example, that if a certain subset U of a square S has probability larger than zero -- that is, if when you throw a dart at the square there is a non-zero probability of hitting a point in U -- then there must exist smaller squares in S in which the U-points have probabilities arbitrarily close to 1.) I mentioned this to my friend Stanislaw Ulam, who had worked with von Neumann, and Ulam thought he recalled that von Neumann had been in a hurry when he wrote the book, it was a first draft and he never revised it. And so, he was writing down how he thought about the problems.
OL: He improvised the book ...
MM: Yes, and I found that once I had read it, I never have to go back. Whereas most other math books have been made terribly tight; all extra or "unnecessary" words removed, and all traces of how the author actually thinks about the problem. My point is that it can take much longer to read a shorter book with the same content.
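
(If it helps anyone place it: the result Minsky describes sounds like the Lebesgue density theorem. In the usual notation, with m Lebesgue measure and B(x,r) the ball of radius r about x, it says that for a measurable set E and almost every point x in E,

    \lim_{r \to 0} \frac{m(E \cap B(x,r))}{m(B(x,r))} = 1,

which is the precise version of "smaller squares in which the U-points have probabilities arbitrarily close to 1." I'm inferring the identification from the description, so take it as a guess.)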

The software company where I worked once had an intern who was an undergrad in Electrical Engineering/Computer Science. At work, we encouraged the use of long identifiers for variables and function names to make the code clearer and easier to maintain. He applied the same technique to a class assignment and was graded down because his identifiers were insufficiently terse. Predictably, the teacher was a CS professor who had been in the math department before CS was moved to engineering.
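
A toy before-and-after (names invented; neither snippet is from the actual assignment) shows what was at stake:

    /* The terse style the professor rewarded: */
    int f(int a[], int n) {
        int s = 0;
        for (int i = 0; i < n; i++) s += a[i];
        return s;
    }

    /* The descriptive style we encouraged at work: */
    int total_orders_for_period(int daily_order_counts[], int num_days) {
        int total_orders = 0;
        for (int day = 0; day < num_days; day++)
            total_orders += daily_order_counts[day];
        return total_orders;
    }

Both compile to the same thing; only one tells the maintainer what the numbers mean.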

I have a BSc in mathematics, and I even vaguely remember my topology course, but that abstract was completely opaque to me.

I also remember my probability course: We spent the entire term on random walks and Markov chains. If asked if I know anything about probability, I reply that I do not. In a subsequent statistics course, I had to quickly teach myself the binomial distribution -- that is another reason why some people hate mathematics. I also remember the complaints of a fellow student who was taking calculus 1 and physics 1 simultaneously.

I work as a programmer, and in one of my first programming classes, the instructor said, "In the real world, any program will be modified at least every two years. Also, any program you have written more than six months ago might as well have been written by someone else. You should write your program as if the person maintaining it is a homicidal maniac who knows where you live." I remember all too well a program that I had to modify which had a subroutine in assembler. It took me half a day to figure out what it did, and another day to figure out why it was doing it. A few lines of comments at the top would have saved me that time -- I inserted the comments as part of my modification.

I wrote my master's thesis in Computer Science demonstrating that some of the things that Donald Knuth wrote in volume 3 of The Art of Computer Programming were, in fact, wrong. For example, quicksort is given as a more efficient sorting algorithm than bubble sort (two examples of computer jargon) because, in a worst case scenario of a file with n records, quicksort has to do only n x log n switches, while bubble sort would do n squared switches, and log n is less than n for any n greater than 1. However, the standard implementation of quicksort is as a recursive subroutine (i.e., a subroutine that calls itself), while bubble sort is not. Thus, the time savings from the more efficient algorithm is more than eaten up by the multiple subroutine calls. For those of you who find this opaque, Knuth proved that quicksort was theoretically better than bubble sort, while I showed that on a real computer, bubble sort was actually better -- and for files of less than about 5,000 records, there was no appreciable difference.
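
To make the comparison concrete, here is a minimal sketch of the two routines in modern C (not my original K&R code, and simplified): bubble sort is a pair of tight in-line loops, while the textbook quicksort pays for a function call at every level of recursion, which is exactly the constant-factor overhead I was measuring.

    /* Bubble sort: n-squared comparisons in the worst case, but no
     * function-call overhead at all. */
    void bubble_sort(int a[], int n) {
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - 1 - i; j++)
                if (a[j] > a[j + 1]) {
                    int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                }
    }

    /* Recursive quicksort (Lomuto-style partition): n log n comparisons on
     * average, but every partition costs a call frame, so the constants
     * are larger. */
    void quick_sort(int a[], int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;
        quick_sort(a, lo, i - 1);
        quick_sort(a, i + 1, hi);
    }

For small files the simple loops win on the constants; only for large enough n does the n log n growth rate take over, which is consistent with the roughly 5,000-record figure above.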

And that is a problem I found with computer science professors: They tended to concentrate on the theoretical, not the practical.

John, Knuth wasn't wrong. You showed empirically that your implementation of bubblesort outperforms your implementation of quicksort on the architecture that you were using. Knuth proved theoretically that the running time of quicksort grows at a slower rate than the running time of bubblesort under an abstract model of computation that [is thought to] encompass all models of computation.

You can't show that every implementation of bubblesort in any language will outperform every implementation of quicksort on every computer that anyone will ever manufacture. But that's what you would have to do to prove that Knuth was wrong.

It is certainly worthwhile to understand the empirical performance of implementations of algorithms on real-world architectures, but that's not the point of Knuth's book or of an algorithms course. Speaking as a CS professor, I am certainly interested in the practical, but the fact is that what was practical 10 years ago is not what's practical today. If I only teach what's practical today, I'm not preparing my students for 10 years from now. I need to prepare them to think about why they're doing what they're doing so that they can adapt to the rapid change in what's possible.

re. Einstein explaining relativity to his grandmother: see xkcd 895.

alex: "Scientific illiteracy is a far more worrying problem."

Isn't that what we're talking about? Knowing what words mean is necessary for literacy. In education, to the extent that excessive or unnecessary jargon is an impediment to learning, it is a problem; to the extent that appropriate jargon is necessary for communicating (hence learning) efficiently, it is a solution. Key words: "excessive", "unnecessary", and "appropriate". The thresholds for those are rather different in education vs. the abstract (or content) of a research article. Even within education, the thresholds vary depending on the level of education.

(Total thread derailment: irrationality, intellectual laziness, and lack of curiosity are more worrying problems.)

John @34: I'm going to expand a bit on Lylebot's remarks.

When a computer scientist says that bubblesort is an O(N^2) algorithm, he means that the run time of the algorithm on a data set of N elements is a2 * N^2 + a1 * N + a0. (In physics, we use the term in a similar fashion.) Similarly, quicksort is said to be O(N log N) because it has a run time of b1 * N * log(N) + c1 * N + b0 * log(N) + c0. The a's, b's, and c's depend on the implementation and architecture that's running the program. It does not surprise me that b1 might be greater--even substantially so--than a2, as you apparently found in your experiments. Also, for small to moderate values of N (and 5000 is moderate in many applications; I sometimes work with data files that have millions of points), the lower order terms might be significant; for instance, on a system with a slow disk a1 and c1 might be noticeable. But if you make N sufficiently large, the leading term always dominates. If I needed to sort a few dozen records and I didn't have a canned sort routine available, I might well decide that the reduced effort of writing a bubblesort routine would be enough to justify using that routine. If I needed to sort millions of records, and to do so on a regular basis, that would justify the extra effort to implement a quicksort routine.
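
As a toy illustration of how the constants and the leading term trade off (numbers invented for the example, not taken from John's measurements): suppose the implementation constants work out to roughly

    T_{\text{bubble}}(N) \approx N^2, \qquad T_{\text{quick}}(N) \approx 50\,N\log_2 N.

Then bubblesort is cheaper up to about N = 440, after which quicksort pulls ahead; by N = 10^7 the N^2 term makes bubblesort slower by a factor of several thousand. That is what "the leading term always dominates" means in practice.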

By Eric Lund (not verified) on 26 May 2011 #permalink

quantheory: But if you encounter a phrase like "Whitney tower", not only do you not even really know what kind of thing is being talked about, but you might not yet have any concepts in your head that provide a useful raw material for refining your picture of what's happening.

Or worse, while you will have concepts that provide raw material, there may well be so many intermediate steps that it is the mental equivalent of trying to build a Saturn V rocket after having been dumped into the middle of an uninhabited continent; meaning "useful" is a bit of a stretch.

Contrariwise, this is an inherent hazard of abstract mathematics. Simple-seeming questions using few mathematical tools, such as whether Fermat's last theorem is correct, may involve deeply bizarre digressions to build mental tools powerful enough to make answers. Those digressions can then be used to ask questions that were not previously possible to ask... which may require even more powerful tools.

Now, repeat for a couple hundred years.

That said, abstract mathematicians aren't making matters any easier.

Context is important. Is the audience the general public? Interested novices? Fellow scientists/mathematicians/engineers/technicians in that particular field? Fellow experts working in the specific area of the field?

In this case, I disagree with everyone who uses the word "jargon" indiscriminately. Precise meaning requires precise words - or tediously long expositions explaining every little thing. I have certainly read abstracts of articles on biology, chemistry, physics, etc. that are quite opaque to me, because I do not know what the words being used mean. However, experts working in that particular area of that field do know what they mean, that is what matters, and that is all that matters.

Now, if the context is different from that of an expert communicating to fellow experts, then obviously changes in communication are required by the communicator because precision of meaning is not that important since you just need to get "the idea" across, but some people don't necessarily make the switch in communication modes effectively, whereas others are very good at perceiving communication context and switching their mode of communication appropriately.

"Jargon" is a pejorative used often (not always, but often) by people who are wrongly criticizing a communicator for having used precise technical language in the correct context, merely because they are promoting some form of anti-intellectualism. Just because I am either unfamiliar with (ignorant of) or not capable of comprehending the language that experts in specific fields are using to communicate to each other in a precise, high-information-density manner is utterly irrelevant to the utility of using such language.

By Steve Greene (not verified) on 27 May 2011 #permalink

A few more comments on my #34 and the follow-ups. I refuse to believe that any recursive algorithm will ever out-perform a fairly short in-line routine. The overhead for subroutine calls is never going to go away.

I should have said that I did the work in 1984, on a DEC VAX 11/780 running Unix SVR2. The code was written in K&R C.

My thesis arose from a practical problem I had at the time: I had to sort a file with about ten million records that was in fairly good order. I wanted to know what the best algorithm would be for that. It turned out, both theoretically and practically, the most efficient algorithm was Shell sort. I then tried various algorithms with files of varying order (pretty well sorted already, random order, and inverse order) and number of records. I discovered that on the VAX, with files of over 150K records, both quicksort and heap sort (which were recursive algorithms) failed because they ran out of stack space.
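
For anyone who hasn't seen it, here is a minimal sketch of Shell sort in modern C (not the K&R code from my thesis) showing why it suited that job: it is iterative, so there is no stack to exhaust, and on data that is already mostly in order the inner loop does very little moving.

    /* Shell sort with the simple gap = gap/2 sequence: no recursion, and
     * little data movement when the input is already mostly sorted. */
    void shell_sort(int a[], int n) {
        for (int gap = n / 2; gap > 0; gap /= 2)
            for (int i = gap; i < n; i++) {
                int v = a[i];
                int j = i;
                while (j >= gap && a[j - gap] > v) {
                    a[j] = a[j - gap];
                    j -= gap;
                }
                a[j] = v;
            }
    }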

I just re-read my last few paragraphs, and noticed that they were filled with jargon. But I was writing them in response to a couple of posters who were obviously "persons skilled in the art" (as patent law puts it). And that is a proper use of jargon. An experienced programmer should know what a "stack" is. Similarly, anyone dealing with computers in the 1980s would recognise "DEC VAX 11/780", "Unix SVR2" and "K&R C" -- these are all "terms of art", and the comments by Steve Greene in #39 would apply.

More years ago than I care to remember, I was denied the chance to do a reasonably rigorous level of high school maths because I was considered incapable of understanding it. By some strange miracle I managed to matriculate for Uni to study science. It was a mixture of horror and delight when I strolled into my first year maths class and encountered calculus for the first time. I saw maths as no longer some dry formulae with no apparent usefulness but an art with a beauty all its own.

Unfortunately most of it was beyond my comprehension, and a serious discussion with the prof at the end of the lecture saw me putting Uni maths on hold to study the relevant high school maths for six months at a technical college. I was not helped by a teacher who, when questioned, responded with "It is too obvious to bother explaining" when I couldn't follow the topic. I persevered and after six months managed to learn enough to survive the necessary uni maths.

I still struggle and have used some of what I have learned in my own research but most of it has died with the grey cells that contained it. I am still delighted though that my stepson is now doing the same sort of maths that was initially forbidden to me and I am still capable of helping him with some of the problems.

One thing I do remember with great delight was the prof covering three boards attempting to solve a problem before he himself got stuck. There were smiles all around when a student pointed out the mistake he had made two boards back. When corrected it took two lines to solve the problem.