On Wed, 8 Dec 2010 16:58:44 -0500, Austin Clements <amdragon@MIT.EDU> wrote:
> Now that this is in (and I have a temporary respite from TA duties),
> I'm going to finish up and send out my other ~1.7X improvement, just
> to get it out of my queue. Then I'll look at making a performance
> regression suite. Were you thinking of some standard set of timed
> operations wrapped in a little script that can tell you if you've
> made things worse, or something more elaborate?

I recently started making a perf/notmuch-perf script for notmuch (see
below). I was doing this in preparation for my linux.conf.au talk on
notmuch, (though I ended up not talking about performance in concrete
terms).

I don't know how much further I'll run with this now, but if this is a
useful starting place for anyone, let me know and I can obviously add
this to the repository.

So the idea with this script is that the timed operations actually
depend on local data, (your current mail collection as indicated by
NOTMUCH_CONFIG). So the operations aren't standardized to enable
comparison between different people, (unless they also agree on some
common mail collection).

My script as attached runs only "notmuch new" to time the original
indexing. Beyond that I'd like to time some common operations, (adding
a new message, searching for a single message, searching for many
messages, searching for all messages, etc.).

And then on top of this, I'd like to have a little utility that could
compare several different runs captured previously. That would let me
do the regression testing I'd like, to ensure we never make
performance worse.

Please feel free to run with this or with your own approach as you see
fit.

-Carl
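
A minimal sketch of the kind of timing wrapper described above
(illustrative only, not the attached perf/notmuch-perf script; it
assumes GNU time's -f option, notmuch on PATH, NOTMUCH_CONFIG pointing
at the mail collection to index, and placeholder search queries that
you would adapt to your own collection):

    #!/bin/sh
    # Illustrative notmuch timing wrapper.
    # Assumes: GNU /usr/bin/time, notmuch on PATH, NOTMUCH_CONFIG set.
    set -e

    log=perf-$(date +%Y%m%d-%H%M%S).log

    run () {
        # Record the wall-clock time of one notmuch operation,
        # discarding the command's own output so printing results
        # doesn't dominate the measurement.
        printf '%s: ' "$*" >> "$log"
        /usr/bin/time -f '%e seconds' -o "$log" -a "$@" > /dev/null
    }

    run notmuch new
    # Placeholder queries; adjust to something meaningful for your
    # local mail collection (single message, many messages, etc.).
    run notmuch search tag:inbox
    run notmuch search from:someone@example.com

    cat "$log"

The comparison utility mentioned above could then be as simple as
something that diffs two such log files captured before and after a
change.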