Friday, December 28, 2012

The stable state PhD equations

Down in the hall we have a nice poster with the faces of all faculty, staff members, postdocs, and grad students. It is not updated too frequently, but overall it reflects the structure of the department. A great idea, by the way; each time I forget one of these strange English names people have in this country, I just go down and refresh my memory.

But seeing this poster made me ponder the following: how does this distribution of roles in our department match the career plans of said grad students and postdocs? If our university were the only one in the Universe (or if all other universities followed about the same organizational structure), would this system be sustainable? Like in chemistry, you know: when you have a system of reactions and you know the kinetic constant for each of them, you can calculate the steady-state concentrations, and vice versa. What about the PhD pyramid?

So, we have about N professors, and about N postdocs working for them, and about 3/2*N aspiring grad students. Assuming (again) that one spends 5 years in grad school, then ~7 years as a postdoc, and ~30 years as a professor, what would be the probability of getting a TT position in a world like that?

Well, the formula is simple: it's just the ratio of the rate at which old professors retire to the rate at which new candidates pop out. So the total probability, from the grad student's point of view, would be: p = (N_TT/Time_TT)/(N_grad/Time_grad) ≈ 11%. Had the Universe been molded after our department, about half of the candidates would be dropped at the grad-school-to-postdoc stage, and then about one fifth of the postdocs would make it from postdochood to the TT. Which, surprisingly, perfectly matches my estimations for our field in general! It may be a nice coincidence, or a case of convergent evolution. Or maybe some clever people in the administration consciously try to keep the department about in the middle of the spread (which is usually a wise thing to do).
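For what it's worth, the arithmetic above fits in a few lines of Python. The headcounts and durations are the rough estimates from this post, not real data:

```python
# Steady-state pipeline for a hypothetical department:
# N professors (30 years each), N postdocs (7 years), 1.5*N grad students (5 years).
N = 1.0  # arbitrary units; only the ratios matter

grad_outflow = 1.5 * N / 5   # new PhDs produced per year
postdoc_outflow = N / 7      # postdoc positions freed per year
tt_outflow = N / 30          # TT positions freed (retirements) per year

p_total = tt_outflow / grad_outflow                 # grad student -> TT, ~11%
p_grad_to_postdoc = postdoc_outflow / grad_outflow  # ~48%, "about half"
p_postdoc_to_tt = tt_outflow / postdoc_outflow      # ~23%, "about one fifth"
```

Note that the per-stage probabilities multiply back into the total: (N/7)/(0.3N) times (N/30)/(N/7) is again 1/9.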

Still, it's kind of funny. 11% doesn't sound like much at all. That's a competitive field, huh?

Friday, December 21, 2012

Mayan vs Aztec

Google for "Mayan calendar" images. Get this:

Now google for "Aztec calendar" images. See the same thing:

In reality this famous thing is called the Aztec Sun Stone; it is obviously not Mayan at all, and, reportedly, is not even quite a calendar. Yet in the wake of the impending apocalypse everybody screwed up their homework and misguided each other. Set it straight, people, before it is too late!..

Links 2012-12-21

A great summary: What advice would PhD people give to their former grad-student selves.
More than a dozen people shared their thoughts. The result is both thought-provoking and inspiring.

You are at a TT job interview, and they ask you if you have any questions. A nice list of questions to consider. And also - what to say if they ask you to "tell us how you think you will fit here" (or rather how to prepare for questions like that, and what they really mean).

How reading random stuff, and not being too busy, are both so important for scientific success.

Thursday, December 20, 2012

Guns and probability

Let's solve a simple (but unpleasant) probabilistic problem.

  • You are in a room with 19 other people (so 20 in total). One of them turns out to be a "freak".
  • If a freak has a gun - they start shooting people until nobody is left.
  • If a "normal person" has a gun, they shoot the freak, and save everybody.
What is the probability of dying in this situation, as a function of gun availability in the country?

Let's assume that the probability of having a gun is the same for everyone in the room, and equals p. Then the probability of the freak being the only armed person in the room is given by the formula d = p*((1-p)^(n-1)), where n = 20. If everybody carries their guns openly and the freak is rational enough, or if "normal people" always manage to kill the freak before the freak kills anybody, this formula describes the probability of death in this situation. It obviously goes through a maximum, and then declines back to zero:

Let's assume, however, that "normal citizens" don't shoot the freak until they are 100% sure that this guy is actually a freak. So the freak always succeeds in killing one person, and only then are they stopped by a "militia" member. Then the curve would look slightly different, because now the probability of death is 1 (for sure) if there's no other armed person in the room, but it is still 1/20 if there are militia members there: d = p*((1-p)^(n-1)*1 + (1-(1-p)^(n-1))*1/n). This is assuming that you are always a "good citizen", and never a freak.
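Both formulas are easy to play with numerically. A minimal sketch (my own code, not from the post, with n = 20 as above):

```python
def d_rational(p, n=20):
    """Curve 1: the freak shoots only when nobody else is armed.
    d = p * (1-p)^(n-1)."""
    return p * (1 - p) ** (n - 1)

def d_one_victim(p, n=20):
    """Curve 2: the freak always kills exactly one person before being
    stopped; you are that victim with probability 1/n."""
    nobody_else_armed = (1 - p) ** (n - 1)
    return p * (nobody_else_armed * 1 + (1 - nobody_else_armed) * (1 / n))
```

The first curve vanishes at both p = 0 and p = 1 (its maximum sits near p = 1/n); the second one never drops below p/n, which is the point of the argument that follows.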

And this formula suddenly makes some sense. It is possible to decrease the probability of shooting sprees by increasing gun ownership. But at some point the shootings become so frequent that even if each of them is stopped almost immediately, they still take a toll, because in each of them at least one innocent person is killed. You can change the assumptions and the parameters, and the curve will move around, but the idea will stay the same.

- "Well", - the right-wing person would say, - "but people never turn freaks right in the room; they usually turn mad while at home, and take their time to prepare. What if a freak is 5 times more likely to find a gun than a "normal person", because a freak is actively looking for one?" In this case, indeed, the only way to stop the freaks is to give everybody a gun, because d = (1-(1-p)^5)*((1-p)^(n-1)*1 + (1-(1-p)^(n-1))*1/n).

In reality, however, all freaks are different, and while there are some who will sell their belongings and prepare for years, most of them will probably kill other people only if given an opportunity. The harder it is to get a gun, the fewer freaks capable of doing so you will find. For example, if on top of "purely opportunistic freaks", as described above, you also find 1/2 as many freaks that are twice as persistent in getting a gun, 1/4 as many freaks that are three times as persistent, etc., you'll end up with this curve:
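Here is one way to code this mixed population; the exact mixture is my reading of the description above (a freak with persistence k gets k independent chances to obtain a gun, so is armed with probability 1-(1-p)^k, and freaks of persistence k are (1/2)^(k-1) as numerous as the purely opportunistic k = 1 ones):

```python
def p_freak_armed(p, k_max=50):
    """Probability that a randomly drawn freak is armed, averaged over
    the geometric mixture of persistence levels (truncated at k_max)."""
    weights = [0.5 ** (k - 1) for k in range(1, k_max + 1)]
    armed = [1 - (1 - p) ** k for k in range(1, k_max + 1)]
    return sum(w * a for w, a in zip(weights, armed)) / sum(weights)

def d_mixed(p, n=20):
    """Death probability: same one-victim structure as before, but the
    freak's gun-ownership probability comes from the mixture."""
    nobody_else_armed = (1 - p) ** (n - 1)
    return p_freak_armed(p) * (nobody_else_armed + (1 - nobody_else_armed) / n)
```

Because the persistent freaks are armed more often than the average citizen, the whole curve shifts, but it keeps the same qualitative shape as curve #2.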

And here basically, in essence, we return to curve #2. If everybody has a gun, people die all the time. If you reduce gun ownership, the murder rate goes down. At some point, however, you may feel helpless, because if 10% of people walk around with a gun in their pocket, sprees will still happen regularly, and they'll already be quite deadly. That is essentially the situation in any school, or any mall, where the majority of the public obeys the "gun free zone" laws. However, once you drop the average gun ownership rate below a certain point, gun control becomes the only efficient way to further reduce the casualties.

Wednesday, December 19, 2012

Inaccessible citations

Science is important, but if I have two citations on hand, and one of them is easily accessible, while the other one is not, I'm going to cite the accessible one.

I feel a bit sorry for the author (maybe they did not have any choice). I also realize that most probably I'm not "punishing" the publisher at all, because they simply don't care (especially in the case of old publications, whose citations do not affect the impact factor anymore). But still there's an emotional component to it. You're not giving me this 1975 paper, even though you clearly have a PDF version of it, and our university has access to your journal? Fine! I'll just cite the latest review!

At the same time I wonder if there's any incentive at all for the publishers to put their old papers online. Right now I cannot think of any, and it is kind of sad. Without references to old papers, new publications become kind of boring: too mechanistic, arrogant, and shallow. It's like a case of anterograde amnesia, when a person kind of keeps the discussion going, but does not remember anything about the global picture anymore. Also, in behavioral sciences even papers from the 1960s may be directly relevant to your current research, because animals were the same back then as they are now, and the descriptions of their behavior are probably still quite valid.

So, while for new papers the invisible hand of the market can, in principle, encourage authors to publish in open-access journals, old papers are simply left behind.

Wednesday, December 12, 2012

Ideal number of PhDs: is it better to die early, or not to be born?

The job market is like weather: not that it changes as frequently, but like weather it makes a nice and safe discussion topic. You meet another postdoc and don't know what to talk about? Just whine together about the bleakness of job opportunities. You'll both get depressed as a side effect, but at least you'll avoid the awkward silence.

When talking to people, I like asking them what they would do to improve academia, if they had the power to do so. Here are the axioms (the rules of the game):
  1. The funding is fixed (increasing funding is an entirely separate topic)
  2. Number of people willing to do science is fixed (it's not actually, but again, for the sake of simplicity)
  3. You want to increase the scientific output (or at least keep it at the current level)
  4. ...while increasing the "average happiness" (whatever it means)
So, what would you do?

Surprisingly, there's no common theme among the answers I receive. The only pattern I could spot is that people generally call for making the bottleneck they have just crossed a bit tighter, while loosening the one they are facing. Thus grad students call for decreasing grad school enrollment, while making graduation more "guaranteed"; postdocs think that "we need fewer PhDs", to reduce competition for TT positions, and so on. I follow this pattern too, by the way.

But this made me think about the "best theoretical solution" to this "bottlenecks distribution" problem. Does it exist at all? If you follow a reductionist approach, and, for the sake of simplicity, concentrate on the "selection process" for future TT faculty, what would be the best "extinction process" to use? If you were the King of NIH, when would you get rid of them lazy bastards, people who will never become PIs? Of all those who foolishly dream of becoming TT faculty, how many would you push out of the system at each respective year of their evolution?

According to my estimations, the current actual "Extinction chart" for Neuroscience looks somewhat like this:

About 50% of applicants get accepted to grad schools (I assume acceptance rates within each school of ~10%, and each person applying to ~5 schools); 5 years in grad school (I should have assumed more, actually); 50% graduate with a PhD; then 7 years of postdoc, and 20% of postdocs get a TT position.
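In code form, the compounded survival probabilities above look like this (the numbers are the rough estimates from the text, nothing more):

```python
# Stage-by-stage survival probabilities, as estimated in the post.
p_accepted = 0.5   # applicant -> grad school (~10% per school, ~5 schools; rounded up)
p_phd = 0.5        # grad student -> PhD
p_tt = 0.2         # postdoc -> TT position

from_grad_school = p_phd * p_tt                   # odds for someone entering grad school
from_application = p_accepted * from_grad_school  # odds for a grad school applicant
```

With these numbers, 1 in 10 entering grad students, and 1 in 20 applicants, eventually lands a TT position.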

Is it optimal? Obviously, the answer depends on your definition of "optimal". If you try to maximize scientific output (without restructuring job descriptions), you would have to keep them all till the last moment, and then suddenly fire them. If you want to decrease spending, you'd have to admit only ~5% of applicants to grad school, thus making the tenure track more or less guaranteed. If you want "productivity" in terms of "impact-factor-per-buck", you'll have a complex interplay of different factors, and lots of assumptions about salaries, talent distributions, education investments, etc.

But this dichotomy is obvious. What I find much more interesting is another type of an inherent contradiction here: that between Happiness and Fairness.

The "average happiness among the young folks in academia" would obviously decrease, as you increase the competition at the later stages. If you keep everybody in the system for 15 years, and then perform one giant massacre, people will be extremely unhappy about that. You may think of unhappiness being generated when people are forced to leave academia: the more of their life-time they invested into this thing, the more they sacrificed for it, the more unhappy they will be. Or you may think of it as of frustration that is linearly generated over time, in view of the decimation bottleneck to come. The math is the same in both cases: the earlier you fire people, the less they lose. Thus the system in which only 5% of applicants are accepted into grad schools would produce the most "happy" (in terms of being relaxed) academia.

But this system would also be the least fair one! Because 5% admission at the level of grad school would mean that the admissions would be quite arbitrary. What would you base your decisions on? GPA? Undergraduate education? Scholarships and early publications? All these points would essentially translate into one: "pure luck of being born into a wealthy family with good connections". Older applicants, women with kids, foreigners, and lower-class applicants would all be discriminated against, just because they would be perceived as "riskier" investments.

It is kind of similar to the heritability-of-IQ issue: the later in life you measure IQ, the more heritable it becomes. In elementary-school kids, IQ is almost entirely explained by the environment that is "forced" upon them (bad school, good teacher, etc.), but as we move on to older subjects, IQ gradually becomes more and more "heritable", with up to ~75% heritability in adults. The common interpretation of this fact is that as you move on, more and more aspects of your life become a result of your personal choice. We all gradually become self-made people, for better and for worse. Granted enough time, we settle kinda where we deserve to be. At the age of 50 your pre-school experiences are largely irrelevant for your success.

And that's why the later you decimate the postdocs, the fairer the system becomes. If a person doesn't have a Nature paper by their 3rd postdoc position, it's not just "pure luck" anymore. They had a chance to choose a grad school; a PI there; and then, say, 3 different labs. They had a chance to either upscale or downshift. Luck is still important, but while having 1 failed project is quite normal, having 10 out of 10 projects fail over a course of 10 years would probably reveal some kind of a pattern.

This being said, if you think that the grad-school-to-postdoc environment is unhappy and depressing, please do realize that it means that it is at least decently fair. Trying to make everybody in the system "more happy" at the expense of decreasing the in-flow is likely to make the system less fair, less productive, and more frustrating for those worthy people who would be left behind due to their suboptimal early life experiences. Your gambling on your future, and your uncertain career prospects right now, are a reflection of the fact that you have the liberty, the chance, to take a risk. In a no-risk environment, you probably would not have had a chance to try.

Monday, December 3, 2012

A Management Tool every PI should Know and Use

Let's face it: every PI is a manager. PIs have people working for them: people for whom they are somewhat responsible; people who can screw up in innumerable ways, who can suffer, despair, get lost, or make the lab a toxic environment. PIs are managers, even if sometimes they don't want to admit it.

And this is a nice thing, really, because it means that every PI has access to all these techniques, tools, and methods that managers in business have been developing for years. All these "management tools" that work, and that are used every day by hordes of tie-wearing "managers" in cubicles and offices all over the world. These tools are effective, simple, well described, and "ready to be served", even if their names and some of the descriptions may sound cheesy to an average academic (especially a "hard science" one). For a PI to reject these management tools just because they originate from a different subculture is exactly as irrational as for a business manager to engage in some uninformed self-medication. When you need to treat a medical condition, you find a doctor. When you need to manage people, you read a book on managing people. It's a good and worthy thing to do.

Anyway, let me give you a practical example. If I were to pick the single most important management technique that every PI should learn and use, it would be the "one-on-one meeting". Now, even if you mentally roll your eyes and think "Gosh, it is so boring...", don't close the page yet; read on. It is important.

Consider this obvious statement: people who are easy to manage don't actually need to be managed. They are doing just fine: they catch most hints, they usually keep their promises, and they can always come to you with questions when they have any. You don't need a special technique to interact with them! But surely there are some people in your lab who need guidance, support, or control. People who tend to generate problems of some sort. And you know what? They are hard to manage, and thus managing them is unpleasant, and that's why you don't like it and avoid it at all costs. Maybe you dread talking to them about their progress, because they get so defensive that your head aches. Or maybe they are easily offended; or lie to you; or find hundreds of excuses every time you ask them a question. But in any case, by now you may be habitually evading any serious conversations with them. You tried to use jokes to send them a hint, but they don't get hints, and the situation is slowly getting worse. Most probably they also don't come to you with questions about their progress, because these discussions are unpleasant for them as well. So here's the point: the people who need your management the most are the ones who will never get it. Unless you consciously do something about it.

And here's the good news: many of them can be salvaged through a simple but structured management process. Actually, almost everybody can be salvaged; just at some point you will hit a cost/benefit ceiling of sorts, where the effort won't justify the outcomes. Still, it's good to give it a try. Everybody is born clueless, but most people pick it up, and require much less care as time goes on. Try to put it on the right track from the very beginning, and most probably it will only become easier.

- So, what do you want me to do?
- Establish a sequence of regular one-on-one meetings. Put them in the calendar. For those in your team who are "doing fine", do them every quarter. For newcomers and people who may be slightly lost, do them monthly. In the most immature / critical cases, do them once every two weeks. Reduce the frequency as the situation improves.

- Why would I make them formal? I hate everything formal! What good is it? They can come to me any time, I'm not hiding from them!
- First, they won't come, because they either don't know they need to, or they are scared. Call them in. Second, make it formal, and book some time for the meeting, like 30 minutes or so. The benefit here is that, surprisingly, it will make your discussions much, much easier. Talking to a person about their progress is always awkward. Some people hate saying bad things; some people have a hard time saying good things, because it just sounds stupid! Why would you suddenly, in some random hour of some random Wednesday, start praising or criticizing anybody? And here's where a scheduled meeting helps: you sit together, and you have to talk about the good things. And you have to talk about the bad things. You scheduled the time, came to the room, closed the door, it was all really awkward, and thus the quota of awkwardness is already met. Now, as you have to talk about the person's progress, you'll be able to do that. Because you've staged it all correctly.

- But what the heck will we be talking about? It's all clear, and I never make my opinions secret! I always share them in some form or another! How would one more repetition help?
- Again, most probably you did not really share your feedback "openly", even if you think you did. You made some comments here and there, but you never combined them all into one picture for the person to see. And there were other people around, and you might have adjusted your words a bit, or at least it sounded that way. The person may not have heard you. They may have thought it was a joke, or an understatement, or an exaggeration. It's surprising how effective words can be when they are said openly and simply, behind closed doors, one on one. You look them in the eyes (or you don't - I actually hate looking people in the eyes), but the point is: you tell them what you need to say. "This is good. And I mean it. This is bad. And I also mean it. Now here's what we do next." People need that. Most people need that.

- Still, I don't know how it can be useful at all, because what will happen is that we'll have the same conversation again and again. It will not work, because adults cannot be changed.
- You're not trying to change them, you're trying to change their behavior. And that's a much easier thing to do. There are two tricks that will ensure you don't have the same discussion happening again in every meeting. First: you'll introduce some measures, and some deadlines (or target dates). You'll be quantitative. And second: you'll write down your agreements, and you'll share them after the meeting, by e-mail. This way:

  1. At the beginning of each meeting you'll check your notes from the previous one, and compare the actual situation with the one you planned / agreed on. You'll start with the facts. How many experiments were done? Where's the paper? How is the figure doing? Rig construction? Training? Certification? And don't get lost if your initial estimations were wrong. You'll correct them if necessary, but at least you'll have a starting point for a discussion, and a seemingly objective one. It's OK to make mistakes and miscalculate everything, especially in science, where experiments routinely take 10 times longer than they were supposed to. But it's much easier to correct the numbers when you have them than to invent the numbers on the spot.
  2. Your trainee's words will become a promise, and they will have to provide explanations if the "actuals" don't match the "projection". They will become more accountable. If they disagree, they should ideally disagree in the meeting. Once they have promised to do something, they are supposed to keep the promise. If something changes, you either discuss it at the next meeting, or they try to find you in between, but at least they won't be able to say that "they thought it's OK", or "they forgot", or "they did not actually mean it". No, guys, if you agreed to something, do it. If you disagree, say it now. Don't assume: ask. Make certain you both mean the same thing, and clarify the misunderstandings upfront.
  3. The opposite is also true! If you said something, you become accountable for your words. And that's a good thing, as your trainees won't have a chance anymore to say that you promised something and did not do it, or that you changed the scope of the project without telling them explicitly, or that you never told them they are doing it wrong. It's on paper now. 
  4. If worse comes to worst, and your trainee doesn't perform, these papers will help you to part with them without feeling bad and cruel (as you'll have their performance documented), and without looking cruel in the eyes of the broader community (again, because you have it documented). I'm not even going into the lawsuits topic here, but you can extrapolate if you wish.
  5. If worst comes to even worse, and you go insane, and really start changing projects in your mind without telling anybody, your trainees will have the documents on hand that will prove their performance. So at least their future won't be screwed. You see: it's your mutual protection! It's good!
To sum up: Regular, scheduled meetings, with target dates, numerical measures, and after-meeting recaps make your interactions with trainees transparent and clear; make both of you accountable; give you an opportunity to address any issues early (including the sensitive ones that nobody would share in the "lab meetings"), and also protect both of you in case of a conflict.

References: here's a really nice set of podcasts about management techniques. Note the topics (and names) of the first 2 podcasts ever recorded (back in 2005!). It's indicative.

PS. If you, my beloved reader, are not a PI, but a postdoc, graduate, or undergraduate student, all that was said here still applies to you, only in reverse. If your PI doesn't have a process like that on hand - for God's sake - force them, or trick them, into it. Meet with them regularly (even if they don't realize you're having one-on-ones). Send them notes of your discussions (even if they don't read them). Update them on your status, and demand feedback from them (whether you meet the expectations, exceed them, or do not meet them - in which case immediately learn why, and how you're supposed to meet them). If your boss is overly delusional, you'll know it now, not 4 years into grad school. If your boss is mistaken, you'll clear up the misunderstanding now, while it is still small, and neither of you has built an immune response against the other. Do it, it's useful!

Friday, November 30, 2012

More on the Impostor Syndrome

One of the responses to my "benchmarking charts" was that essentially I am a "Postdoctoral baker agonizing over meeting the metrics instead of working on what really matters". The word "baker" here is referring to a metaphor of building your career through following a rigid "recipe" instead of freely and creatively improvising.

Me - a baker? Agonizing over the metrics? Ha-ha-ha-ha! Ha...

...Well. Yes, I am agonizing over it.

But: I am not quite a baker, simply because my pie is long ruined. I have no papers from my grad school (or rather I have 3 decent papers that nobody cited, and probably nobody ever will, as they were written in Russian, and published in Russian journals. They are translated, and even indexed in PubMed, but it doesn't help). My university will never send my transcripts when I am applying for jobs, because Russian universities just don't do this kind of thing, ever. And my transcripts are in Russian anyway (I have a translation, of course, but still). I was not doing science for 5 years after getting my PhD, for one reason or another. My pie is ruined, and the only thing I can do now is to be creative about it, and to try to transform it into some kind of a stew, or a shepherd's pie maybe... Why not? Remove the crust, add some water, some celery, make the roux, pour it on top... Everything is possible! Also come up with a nice name for this dish. Claim it to be a good example of traditional Zanzibar cuisine. Nobody can verify it! Improvise!

Thus for me the metrics are only important because they give me the lower threshold, and the ideal target. I try to prepare for something modest and low, while aiming for something high and clearly unachievable. In a hope to fall somewhere in between. That's the strategy.

And on the impostor syndrome: you know, when teenagers fall in love, they often don't understand that a rejection does not always mean that they are bad, awkward, or even unpopular; it does not always mean that they are "a failure". Quite frequently it just means that their crush is not smart enough to see them, and to appreciate them as they deserve. If they care about your skin color or social circle, are you really sure you want to be with them, to meet their family, and their friends? Really?

I believe that the same, at least to some extent, applies to job searches. If a company doesn't hire you because you're not boring enough, I'm not sure you'll be happy working for this company. You may give them another chance, and even a third one. But at some point you just have to give up on saving them. And look for a different place. So when applied to science, I try to convince myself that it is not Academia evaluating me. It is me putting it to a test. If I work really hard, and publish as well as I can, and learn to write, and network, and collaborate - will academia be fair enough to notice that? If yes - well, that would be nice. If not - there are other options.

And also it definitely has something to do with this "scientist as a monk" meme, or with the problem of "sacrificing your family for your career". There's no point in doing a postdoc if you don't like being a postdoc. There's no point in expecting the future to reward you for your sacrifices. Live it here and now. Try to have fun with this Science thing. If at the next step it gets rewarded - good for them. If not - move to another state / country, shave your head, go under your middle name, and start it anew. Also write a memoir!

Wednesday, November 28, 2012

Cumulative Impact-Factor Benchmarking

Speaking of CVs, publications, and impact factors. Some time ago I got pretty anxious about this whole publication benchmarking story. You know, when some people say that "everyone should publish at least one paper a year", or somebody mentions in passing that "nobody is hired without at least 1 glamorous paper", or "second-author papers do not count", and so on.

So I decided to do some research myself. I did the following:

  1. Identified some people in my field who did something remotely similar to what I do, and who are or were on the job market within last ~5 years.
  2. For each of them, I downloaded a full list of their publications as undergrads, grad students and postdocs, as that's what they showed (or are showing) on their CVs when looking for a job.*
  3. For every publication I found the impact-factor of the journal it was published in.
  4. I discounted 2nd-author papers and reviews by 75% (so 4 second author papers = 4 reviews = one first author paper). It's obviously a wild guess, and an oversimplification, as the formula would not hold at extremes, but overall it's probably about right.
  5. And finally, I calculated their cumulative impact factor. And then plotted this value vs. years that passed since they got their PhDs.
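Steps 3-5 boil down to a weighted sum. A hypothetical sketch (the function name, the CV, and the numbers are all made up for illustration; only the 75% discount rule comes from the list above):

```python
def cumulative_if(papers):
    """Cumulative impact factor of a CV.
    papers: list of (journal_impact_factor, kind) pairs,
    where kind is 'first', 'second', or 'review'.
    Reviews and 2nd-author papers count at 25% of a first-author paper."""
    weight = {'first': 1.0, 'second': 0.25, 'review': 0.25}
    return sum(impact * weight[kind] for impact, kind in papers)

# A made-up CV: one Nature-ish paper, a 2nd-author paper, a review,
# and a mid-tier first-author paper.
cv = [(30.0, 'first'), (4.0, 'second'), (8.0, 'review'), (10.0, 'first')]
cumulative_if(cv)  # 30*1 + 4*0.25 + 8*0.25 + 10*1 = 43.0
```

Under this scheme, 4 second-author papers in a journal indeed count the same as one first-author paper there, matching the "4 = 4 = 1" rule of thumb from step 4.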

Here are the results. Black lines represent those who got their TT positions in really cool (glamorous) places. Brown lines indicate successful landing on TT positions in quite decent places (universities, colleges). Blue lines are for those who either got non-TT positions, or only got some really terrible (unacceptable) offers in some weird places, or did not receive any offers so far.

What do we see here? A bunch of stuff!

  1. To get any kind of a TT position in my subfield you need to reach a threshold of about 60 cumulative IF. That's either 2 publications in Nature, or 15 Plos-ones, or anything in between**.
  2. You need to get it in about 12 years including grad school. A gentle slope means asking for trouble.
  3. Glamorous papers (those sudden jumps in the cIF) do increase your chances, but mostly because they pump up your cIF. Although one can argue that they also improve your image (see that black line among the brown ones, with a distinct CNS jump).

I personally am not on track yet, but I have some chance of getting on the brown track, if only the papers I'm working on now get published properly (in good journals).

* Practically speaking, I took all papers published before they got their first last-author research paper; plus any non-last-author research papers published in 2 years after that. (This additional complication is necessary, as apparently many people publish their last postdoc paper already after publishing their first PI paper. But I assume they still had it shown in their CVs as "submitted"; thus the adjustment).

** Update: No doubt, the "threshold" will be very different for different fields, and even subfields. My goal was to benchmark myself against those who would have been my peers, had I started my career some 5 years earlier. It would be really great if somebody could make a personalized online benchmarking tool like that, for everybody to use, for my web-programming skills are just not good enough for developing it. If you can do it - please, do it!

Tuesday, November 27, 2012

On the Impostor Syndrome

Over the last year I read a lot of excellent blog posts about the Impostor Syndrome. This famous feeling that all scientists have every now and then (young folks especially): the feeling of being the most stupid person in the room (or in the department, or in the field). When you go to a scientific seminar and have no idea what the person is talking about. Or when a person says something that you think is either boring or insane, but suddenly everybody starts asking serious, thoughtful scientific questions, and you realize that it is probably you who is not really a scientist. Or when you have a great idea for an experiment, only to find that it was done back in 1976, and your PI references it in a good half of their papers. Or when you meet your peer at a conference, only to realize that they have 3 Nature papers and 2 job offers, while you do not exist as a human being. The occasions are endless.

And while there are ways to fight the impostor syndrome, and some of them are really cool and important, what I find interesting is how this whole phenomenon looks in the light of another topic, the most popular topic for midnight academic conversations: that of the overproduction of PhDs compared to the TT positions available.

Based on my original research, of every 10 people entering grad schools in Neuroscience, only one will get a "normal" tenure track faculty position. Well, do you realize what it means?

It means that when I feel a surge of the "impostor syndrome", statistically speaking, I'm just feeling the truth as it goes down my spine. Statistically speaking, the most plausible hypothesis is that it is not a "syndrome" at all, but just reality looking me in the eyes.

But if it is true, and I really am an impostor, what should I do about it? Can I still survive in academia, at least for a while, without getting depressed, even though, statistically speaking, I know the truth?

One solution (and I've seen some people following it) is to claim that everybody is an impostor, and that getting a TT position is a matter of pure luck. Well, I don't like this idea, because I find it almost as depressing as the "simple solution" of leaving the field immediately. I don't like raffles and lotteries. If it's pure luck, then not only am I an impostor, but the world around me is more unfair than I can handle. I want to believe that people who get TT positions usually deserve it. Even if I don't make it there, at least I'll know that the world is in good hands.

My solution, then, is to pretend that being an impostor is an inherent feature of science. It has something to do with the importance of stupidity in scientific research. Scientists are impostors by design: because they venture to describe and explain something that cannot possibly fit into one person's head. And so this whole science affair is a giant Mardi Gras procession of impostors. Quantitatively some of them are more efficient than others, but qualitatively, all are alike.

Which means that "Fake it till you make it" is not just a saying, or a joke, but actually viable practical advice, and the only solution to the problem. To boldly pretend something that no man has pretended before. Coz having dirty hands makes you right, and who cares whether you are really a "wrong person", as long as you do what a "right person" would have done in your place.

And as for the career perspectives... I'll think of it tomorrow.

Monday, November 26, 2012

Introductory Neuroscience Links

Here I'm posting my collection of links that may be useful for teaching introductory neuroscience at a freshman level. I collected them last year, when preparing for my summer course, and while this year I'll try to update it, the core is likely to remain the same.

One major change in my attitude this year is that I've grown somewhat tired of TED talks, as I've simply lost trust in them. TED is just not peer-reviewed enough, which is quite a problem for science topics. In a popular science talk the presenter has to oversimplify the facts, and also to explain them in some way, even if, scientifically speaking, these "explanations" are still highly speculative theories. And the audience will never know that. The situation is kind of awkward, because both the simplification and the "explanation" are required parts of the packaging that makes the talk popular and the information digestible. They have to be present in a good talk, because you have to explain to people why your research is important, and what all this stuff could actually mean. The problem, however, is that the world of science is so vast and specialized that even scientists themselves often have a hard time distinguishing a mainstream scientific star from a passionate but weird outsider, unless the talk hits on the listener's immediate field of research.

But still I'll provide at least some links to the TED talks, because I want my students to improve their presentation skills, and the TED talks I've selected are rather good in this regard.


Free neuroscience textbooks:

My favorite series of lectures by Robert Sapolsky (playlist of 25 hour-long videos):

Some TED videos:

Youtube case presentations:

Transient global amnesia:

Bipolar, both phases in same patient:

Split brain:

Broca's aphasia:
Wernicke's aphasia:

in childhood:


Absence seizures in children:

Parkinsonism + Deep Brain Stimulation (before and after in each video):

Dystonia and Deep brain stimulation:


Tuesday, November 20, 2012

Xenopus Thanksgiving Card

PI working hours

One of my colleagues made the following statement today: the PI, they said, should serve as an example to the lab. And thus the PI, at least in theory, should be the first person to come to the lab in the morning and the last person to leave it in the evening, thus keeping students/postdocs stressed and encouraging them to work harder.

It's hard to convey how much I disagree.

First of all, such a PI would not have any normal life outside the lab. And as a person aspiring (maybe delusionally, but that's a separate topic) to become a PI one day, I don't want to live in the lab. Working a lot, and working after hours, is fine. But not too obsessively, you see; not in a robotic fashion. Not just staring into the screen 24/7.

But maybe even more importantly, such a workaholic PI would not be a good example for younger scientists. When I worked at P&G, the young managers were explicitly told that working long hours is fine, but one should always be aware that the employees will look at you, and benchmark against you, and feel bad for leaving the workplace before you do. And that they'll just stay there unproductively, for the sake of not letting you see them leave; and they will suffer, and burn out, and start hating their job. Which is not something you want your team members to feel.

So we were told that if we needed to regularly work till midnight for some reason or another, it was advisable to find a conference room and hide there, and to switch the work instant messenger off. So that none of your direct reports would know that you were still working.

Because that's the point: while for some people working long hours is a result of their passion, for most of us working long hours is a consequence of bad organization, and bad time-management. The self-perpetuating vicious circle of procrastination. Which is even more painful in science than it is in those more predictable jobs I used to have: it is easier to procrastinate in science, for so many reasons. Because the things you're supposed to do are so much more vague; and because the results are that slow to come; and because you're supposed to think every now and then, which may look superficially similar to day-dreaming...

Anyway. In my opinion, the ideal PI should go home exactly at 5 (or whenever the working hours happen to end). And then secretively work at home if they wish to. And still be productive and successful. Being productive in 8 hours of work per day, weekends excluded, - that's the inspiration, and the model behavior I'd like to see. I really want to believe that it is possible, and I need somebody to demonstrate it to me on a daily basis!

Thursday, November 15, 2012

On blogging

Suddenly I realized that writing here (and also writing comments to other people's blogs) is mighty hard for me exactly because I blogged for so long in Russian. And not even because of the language (it's a problem, but an unrelated one). It's because of the difference in cultural biases and assumptions.

In Russian I am habitually controversial and provocative. But I can afford that exactly because I know what the assumptions of my readers are (on average at least), and what they are taking for granted, and what they can tolerate, and what they can't. Every now and then I make a mistake in one direction or another, but overall I know where we are, and where I'd like to lean, so I do it.

But when I try to write in English, suddenly I find that all my "default cultural settings" are wrong! I'm not even sure if what I say is acceptable at all, because I can never be sure people will understand me. Let me give you one example: I'm really interested in human population genetics, and in how human evolution has been accelerating over the last several thousand years, at ever-increasing speed. And how weird and unpredictable our evolution has become, with all these cities, diseases, personal choices, economic considerations, etc. It's fun, it's interesting, and I think it is a good topic (even if a provocative one) that can be discussed.

And I know for sure that it is being discussed in some way or another. I know of a great blog on this topic; I know of some books about it. It is possible.

Yet when I feel like saying something, or even worse - try to say something, it turns out quite awkward. Like in these discussions on PhDs having, or opting out of having kids for example:

I really wonder what the effects of "PhD being the best contraception" could be, in terms of genetic drift. But at the same time I am aware of the long and uneasy history these kinds of questions have had in the US, with all this eugenics and other horrible stuff. So can I even muse on the impacts? Or is it totally socially unacceptable? On those occasions when I talked about human evolution with fellow scientists, I sometimes got pretty harsh rebuttals that I feel I did not deserve. And my guess is that it mostly happened because of different baseline assumptions. It's funny and sad at the same time.

Tuesday, November 13, 2012

Should I reference it?

I'm now writing a paper, and in it I'll be using a certain FDR (False Discovery Rate) statistical procedure. It's a clever and not-too-conservative way to adjust for multiple comparisons, and to keep P-values in check. You should absolutely use it in your work if you have not been doing so yet:

But what I don't quite understand here is whether I need to reference the original papers in which the FDR method was described for the first time, or not.

The method is not too old: both papers that justify it were published in 1995 (see the first 2 references in the Wikipedia article). At the same time, by now this method is rather well known and widely used, and based on Google Scholar statistics the first of these two papers has an impressive 15,582 citations. That's a lot! Does it mean that I can afford not to reference it?

Also, should the fact that the Wikipedia article is so nicely written, and comes up as the first result in a Google search, affect my decision?

Generally, what are the criteria? When do you stop referencing a methodological paper like this one?
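(For reference, the core of the Benjamini-Hochberg FDR procedure is only a few lines of code. Here's a minimal Python sketch, my own paraphrase of the 1995 recipe rather than any official implementation: sort the p-values, find the largest rank k with p_(k) <= (k/m)*alpha, and reject everything up to that rank.)

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: return a list of booleans marking which
    p-values are declared significant at the given FDR level alpha."""
    m = len(pvals)
    # Indices of the hypotheses, sorted by ascending p-value
    order = sorted(range(m), key=lambda i: pvals[i])
    # Largest rank k (1-based) such that p_(k) <= (k / m) * alpha
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    # Reject all hypotheses up to rank k in sorted order
    significant = [False] * m
    for i in order[:k]:
        significant[i] = True
    return significant
```

For example, with pvals = [0.001, 0.01, 0.03, 0.5] and alpha = 0.05 the first three survive, even though 0.03 would fail a plain Bonferroni cutoff of 0.05/4 = 0.0125. That's exactly the "not-too-conservative" part.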

Friday, November 2, 2012

Xenopus laevis tadpole neuroscience art

Here's an animated gif based on my tadpole painting from 2 posts ago. Now I regret not being fully scientifically rigorous about this circuitry thing. It could have become a really fun piece of educational material.

But anyway. Tadpole of Xenopus laevis, with most of the neural circuitry I care about: projections from the retina to the optic tectum, and then from the tectum to the hindbrain. Reciprocal projections from the hindbrain that pass somatosensory information and that from the lateral line back to the tectum, for multisensory integration to happen. And then also some downstream projections from the reticulospinal neurons to the spinal cord, and the central pattern generators there. And all that - in one animated gif! I'm somewhat proud of it =)

Thursday, November 1, 2012

How to fight procrastination?

How to avoid procrastination? I don't really know. I've just spent around 2 days looking for software to build a knowledge base (this search failed; the software of my dreams doesn't exist, so the search was essentially vain and counterproductive). And then about 1 more full day (in total) blogging about the neuroscience of homosexuality in Russian. (Why? Why? It's not even my topic!)

But introspectively I can list the following reasons (or maybe rather ways) I procrastinate:

1) The first reason is the simplest one: I try to avoid work; try to find distraction. Any kind of distraction!
2) I seek news; any kind of novelty; any kind of information that was generated recently, that is "hot from the press". As if to prove myself that the world is still moving on around me.
3) I seek interaction with other people. To prove that I'm not alone.
4) I seek praise from other people. To feel that I'm useful.
5) And sometimes I'm just too tired to keep working.

Normally I would cover all 5 points by procrastination, mostly of the Internet kind: checking e-mail, blogs, Twitter, Facebook, Reddit, Wikipedia watchlist pages, etc. And it's bad. It should not be like that.

Some people (theorists and mathematicians) can afford to work without a computer, with just a paper notebook. Or at least to unplug from the Internet. I usually can't afford it, as I have to check Pubmed, or google for Matlab tips and tricks, all the time. So I need to find another solution.

After SfN I successfully (at least so far) quit Reddit, which was by far my worst way of wasting time. Reddit is a great tool; I quite successfully used it for conducting science-related polls and advertising some of my work, and I learned a lot from it. But it's just way too demanding. I can not afford it. I also stopped checking Twitter and Facebook, by consciously making the feeds unreadable. I unsubscribed from most Google+ alerts. In total it should help with Reason #1, at least for a while.

Instead, I subscribed to some "new publications" kind of alerts at WebOfScience and Pubmed. Maybe it will satisfy the craving for the "hot" information (Reason #2).

Reasons #3 and #4 aren't a problem when I'm teaching, or when I am about to present, or have recently presented, my work outside the lab. I still don't know what to do "in between". Maybe developing this blog can be a solution.

And for the #5 - I will probably try alternating tasks. Can not program any more? Read some papers. Can not read? Reground the rig. Can not work with the rig? Read some science-related (but really science-related!) blogs about giving tenure talks (as if it were relevant) and what not. We'll see how it works.

Wednesday, October 31, 2012


We'll have some kind of a competition here in the University; something about Science and Art, and how they interact, and how one can be used to express the other, or something like that. I'm not quite sure. Anyway, I decided to participate in the competition, because why not.

This thing will be my submission (pen is given for scale). It's drawn with a sharpie pen, acrylic paints, and a bit of brown colored pencil at the very end. It's probably titled "The known circuitry of Xenopus tadpoles". The circuitry shown here is only partially real; I mean, it's mostly real, and should give a correct idea, even if not being completely 100% accurate. And real tadpoles aren't green of course (they are quite transparent), but green tadpoles just look so much better and watercolorish! But it's definitely art about science - they can't deny that.

Well, we'll see how it goes.

Saturday, October 27, 2012


I love the description:
Superdocs are the suddenly-graying, tired-eyed waifs who you see in the hallway sometimes but never at seminars. Research fellows are non-faculty that get their own lab space (maybe with 1 tech and 1 student). They are usually crying in their offices. (source) 
It's a bit exaggerated, but the imagery is insightful. 

Tuesday, October 23, 2012

Battle of personal wikis: continued

So, my current assumption is that I need a personal wiki to maintain my personal knowledge base. I really like the idea of cross-linking topics. I also really like the idea of keeping sources (papers) as separate entries, and linking my higher-level cards (ideas, objects, concepts) to them. Finally, if I need to write something, I can always create a separate wiki-tree for it, with a table of contents and some "chapters" linking to the "thoughts" created previously. And then, while writing the text, I can go through all these pages, gradually turning them into a text.

I read the Wikipedia page on personal wikis, and decided to give wikidPad a try as an alternative to OneNote. Well... The good thing is that it can save to html really nicely. The bad thing is that the interface is not WYSIWYG, but rather a wiki-style coding thing, and it makes everything somewhat harder. But the worst thing is that in this particular program legal wiki entries must have a "CamelCase" format, with capital letters and everything. That's not what I need, as many of my terms will look like "AMPA" or "notch". Too hard. Won't work.

OneNote can work as a wiki, and you can "Create Linked Pages" from a word you selected, but it wouldn't autolink words for you, and actually even manually linking your words to existing pages is not that simple. If you have a page named mTOR, and then you type mTOR and want it to link to this page, there's no easy way of doing so. You'll have to go and manually find the page, and copy a link to it, and come back, and add it as a hyperlink. And to make things so much worse, while there's a plugin that allows you to export your notes into html, these within-text hyperlinks are not exportable.

So far the most promising personal wiki app is the one called ZIM. It's WYSIWYG, exports to html, is simple to operate, and looks neat. Probably I'll give it a try.

Index cards (knowledge base) software

I'm thinking of writing something cool and long, like a review. And it would be nice to do it properly, organization-wise. But I'm not actually sure what "properly" would mean here. I guess the general question is: how do you maintain your knowledge database; how do you keep in order all those various things you learn from the literature; and how do you make them usable when you need to write something about them?

An ideal system would probably have the following features:

  1. It should look somewhat like a bunch of index cards, so that you could put ideas, statements, thoughts, quotes and sources there.
  2. But it should be fully computerized, as I don't want to handle any real paper objects.
  3. Ideally - you should be able to keep both your writing (notes, ideas) and your long-term stuff (quotes, sources) in the same system.

Then come some technical considerations:
  4. I should be able to make back-ups easily (have control over the data)
  5. It should work offline (as I'm sometimes too distracted when online)
  6. It should be forward-compatible (it would be silly to start creating a knowledge database in an abandoned obsolete software).
  7. You should be able to "publish" your final text in a readable editable form (as a long Word document or something of this kind).
So far I have the following options as potential solutions:
  1. Scrivener. I kind of like what I hear about it, but I'm not sure it would be good as a knowledge database.
  2. Personal wiki (following this great advice). It provides nice functionality (you can really organize everything properly!), but I'm not sure what software to use for it.
  3. OneNote. This Microsoft Office tool turned out to be really fantastic! And it is also de facto free, as you get it together with Microsoft Office, with the Word and Excel that you have to have anyway. It actually combines the index-card look with personal wiki functionality. So far that's the potential winner. There are some drawbacks though: you can not export the whole notebook as one long text document; it is not easy to edit texts longer than 1 paragraph; and most importantly, I'm not quite sure it's going to be supported by Microsoft in the future, as it has lain in complete obscurity for several years already. It's really good, so why wouldn't they advertise it more? It's kind of unnerving.
  4. Endnote. Doesn't provide index-card functionality (interface). Also a huge drawback with this one is that I don't own a license (it's expensive, and is provided by the university), and thus I can not really invest in it.
  5. Mendeley. Seems to be a nice free alternative, but the "Notes" field is too hard to access, so - no index card functionality again.
  6. Access (personal database). Seems to be a bulky, but viable solution.
  7. Excel. That's what I'm doing so far. It is scalable, portable, highly visual, and generally very nice. The problems are: you need to keep your notes short if you want them to be easily exportable; and it's hard to organize the 3-layered structure of "Topic / Idea / Sources" that I envision.
So overall I'm still undecided. I would probably go with a simple text-based personal wiki if only I could be sure about forward compatibility and future support. This thing, this database, would essentially become my external brain and external memory. I don't want to lose it 2-3 years from now for some stupid technical reasons.

Friday, October 12, 2012

Xenopus Tadpole drawing

To make my SfN poster prettier I decided to draw some tadpoles by hand, and include them as anchor visual elements in the poster design. This one will be the biggest.

Thursday, September 27, 2012

Two links

An alternative abstract browser for SfN 2012:

A great motivational blog about how to become successful in science. And I mean it: it really seems to be useful.

Here's one nice post / picture, as an example of what can be found there:

Tuesday, September 4, 2012


Here! The third applet on this page allows you to turn individual harmonics in the sound on and off:

The only problem with it is that it doesn't allow you to choose the amplitudes of said harmonics, which makes them too loud. You can not simulate a flute or a clarinet, for example. And you can not change the base frequency (it's not a Hammond organ), which makes it a bit harder to demonstrate the blending of harmonics into one timbre. But at least it's something, and it is online.
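If the fixed amplitudes are a deal-breaker, it's not that hard to synthesize the harmonics yourself and play the result back with any audio tool. A minimal Python sketch (the base frequency, sample rate, and the particular amplitudes are arbitrary choices of mine, just for illustration):

```python
import math

def tone(base_freq, harmonic_amps, duration=1.0, sample_rate=8000):
    """Sum of sine harmonics of base_freq; harmonic_amps[k] is the
    amplitude of the (k+1)-th harmonic."""
    n_samples = int(duration * sample_rate)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate
        samples.append(sum(a * math.sin(2 * math.pi * base_freq * (k + 1) * t)
                           for k, a in enumerate(harmonic_amps)))
    return samples

# A crude "clarinet-like" spectrum: odd harmonics only (1st, 3rd, 5th)
clarinet_like = tone(220.0, [1.0, 0.0, 0.5, 0.0, 0.3], duration=0.05)
```

Zero out the even harmonics and you get a hollow, clarinet-ish timbre; taper them all smoothly and you get something closer to a flute.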

And a bonus: an applet that lets you compare different tunings (temperaments), such as equal, Pythagorean, and just, as well as several historical ones.
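The differences between temperaments are also easy to quantify without any applet: just compare the frequency ratios. A quick Python sketch contrasting equal temperament with just intonation (the handful of intervals here is my own selection, and deviations are expressed in the standard cents, 1200 per octave):

```python
import math

# Just-intonation ratios vs their equal-tempered approximations
just_ratios = {"unison": 1.0, "major third": 5 / 4, "fifth": 3 / 2, "octave": 2.0}
semitones = {"unison": 0, "major third": 4, "fifth": 7, "octave": 12}

def equal_ratio(n_semitones):
    # In equal temperament every semitone is a factor of 2**(1/12)
    return 2 ** (n_semitones / 12)

def cents_off(interval):
    # Deviation of the equal-tempered interval from just intonation, in cents
    return 1200 * math.log2(equal_ratio(semitones[interval]) / just_ratios[interval])

for name in just_ratios:
    print(f"{name}: equal temperament is off by {cents_off(name):+.2f} cents")
```

The fifth comes out only about 2 cents flat, while the major third is nearly 14 cents sharp, which is exactly why thirds are the intervals where the temperaments sound most noticeably different.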

Thursday, August 30, 2012

Ideas for a lecture on hearing and music

This year my lecture on hearing and music was quite confusing, which is really a shame, because the topic is so wonderful, interesting, and potentially rich with insights! (Everybody likes music - at least some kind of music - it's a general trait, and so I hope it can be used as a "hook" in teaching).

So now I'm looking for ways to make this lecture next year a hit. So far I am thinking of the following:

1) Instead of just showing formants and spectra in the presentation download a realtime spectrogram tool (like this one for example) and use it to show:

  • whistle, plunger flute (idea of a spectrum)
  • s-sch-sh sounds (noise and the spectrum)
  • voice (with harmonics)
  • throat singing rendition (to show that individual harmonics can be boosted / damped)
  • a-e-i-o-u, ba-da (standard formants)
  • difference between a flute and a clarinet (even vs odd harmonics)
2) More on formants. Trumpet mouthpiece with a "hand-trumpet" to show that "speech-like sounds" may be generated by means other than the mouth cavity. Also this video.

3) Auditory illusions and examples from this awesome site:

4) The easiest way to explain the harmonic series is actually to use a piano, pressing some keys silently, and then exciting their strings by striking other keys corresponding to respective undertones. But I won't be able to bring a piano into the classroom, so either I'll try to use a re-tuned guitar, or just show them parts of these amazing videos:

What else? When talking about frequencies, it may be useful to demonstrate them on the spot with something like this theremin simulator. Also with 2 plunger flutes I can show just tuning (disappearance of amplitude ripples), but I'm not sure they'll be able to hear them easily without prior training. Also I will probably bring an overtone flute (kalyuka) to demonstrate first several harmonics in a woodwind instrument.

Ideally it would be great to make some kind of a mini-lab, having them generate some sounds on their laptops, but I don't yet know exactly what to do. Maybe have them play certain frequencies in certain rhythms, in the hope that it would all mix into some kind of music? A giant interactive music box? I should think more about that.

Tuesday, August 28, 2012

Excitation / Inhibition balance

A slide about the excitation / inhibition balance. I did not invent it, but had to re-draw it from scratch. Maybe somebody will find it useful =)

(Some keywords for the web-crawlers: inhibition, excitation, balance, GABA, glutamate, death, coma, sleep, arousal, epilepsy, seizure, normal state).

Friday, August 24, 2012

Insane in the Chromatophores

Well, this is probably the best neuroscience video of the year =) The guys from Backyard Brains went to the Marine Biological Laboratory in Woods Hole, and there connected to a motor nerve in a squid. And squids can change color, you know. So what would happen if you played some music into the nerve? Well, check it out!

This is a modification of their original demonstration with a cockroach leg:

And is reminiscent of this dancing hair cell video:

But this time it is also beautiful. Not just freaky, but also really beautiful!

Wednesday, August 22, 2012


Every now and then on Reddit I try to put my 2 cents into discussions on philosophy. Obviously I don't try to enter just any philosophical discussion out there, but only those where I feel that some neuroscience, quantitative psychology, or evolutionary thinking may be helpful. I also always try to make the background of my thoughts clear, to prevent possible misunderstanding.

And the result is usually the same: my comments are heavily downvoted. The discussion usually takes some highly theoretical direction, with lots of special words and names I have never heard of. And at the same time these discussions are swarming with statements that have long been proven false! Or, more often, with concepts that do not seem to correlate with anything in the scientific discourse of the last 20-30 years. Be it about the subconscious, decision making, animal cognition, brain development, or game theory.

So my impression so far is that there exists a whole stratum of highly educated people who live in some artificial world, lagging behind the science, as we know it, and maybe even deliberately distancing themselves from its development. It is not that my comments there are especially nice and easy to read of course, but still the contrast between neighboring discussions of science and philosophy is really striking. Especially if you consider something like /r/AskAcademia , where humanities and sciences technically share the space, but at the same time self-select to some extent within each particular post, depending on its topic.

It is all pretty sad overall.

Friday, August 3, 2012

Brain evolution tree

Here I recolored the original image from the Internet, so that the colors matched (more or less) those from the Bear-Connors-Paradiso book. This way it is more consistent, and can be used as a part of BCP-based lecture.

Thursday, August 2, 2012

Nice recommendation on poster layout

I have stumbled upon a nice blog about poster design:

While most posters they discuss in the blog are bad both before and after the changes are made (probably just because they are hopelessly bad), I really like some pieces of advice they give on the blog. Like this one for example. Poster design that promotes results and demotes boring sections:

Saturday, July 21, 2012

Why the hippocampus is called this way

One thing that I keep hearing is that the hippocampus is called that way because, if dissected from the brain, it kind of resembles a Seahorse fish (genus "Hippocampus"). This statement is made by Wikipedia, for example. But the thing is - it does not really resemble it!

I mean, you can pretend they have something in common, and at a certain angle the hippocampus is indeed kind of bent, but you need to try really hard in order to "see" a seahorse in this structure. That's why when people say that, they usually chuckle a bit, and comment that "the morphologists of previous centuries probably had quite an imagination".

At the same time, if you find the hippocampus in a coronal section of the human brain, then, together with the adjacent subiculum and entorhinal area, it really does look seahorse-shaped!

The difference here is that while the whole structure can be "bent" into the shape of a seahorse, no sane person (in my opinion at least) would try to describe it in "seahorse shape" terms if not really prompted to do so. It's an "embryo-like thingy", or anything on earth, but the fish shape is certainly not the first thing to come to mind. While the section, with its gentle curve, definitely does look like a seahorse, with a pouch and everything.

So my proposal is: to stop pretending it's about the whole structure. It's about the section. The morphologists of the past were quite OK; they weren't hallucinating.