On Sun, Mar 25 2018, David Bremner wrote:

> Here's one approach. A given pytest "file" can be embedded in a normal
> (for us) test script. As I write this, it occurs to me you might be
> thinking of embedding unit tests in the bindings source files; that
> would be easy to add, something along the lines of
>
> test_begin_subtest "python bindings embedded unit tests"
> test_expect_success "${NOTMUCH_PYTEST} ${NOTMUCH_SRCDIR}/bindings/python/notmuch"

Hmm. It looks a bit strange to embed the pytest snippets into a shell
script and then execute each of them individually. The only thing
py.test seems to do here is "visualizing" assert output. We could just
use normal python otherwise, and simply not (necessarily) wrap things in
functions.

If we had pytest, I'd suggest the tests were written and executed
separately (from one test script) and the results then collected somehow
into the final aggregator.

(Recently I've been working with py.test-3 -- and am not too happy with
it; perhaps it is better when testing python code, but we use it for
testing C code with the help of ctypes -- so I have some knowledge of
how it works...)

BTW: in case of pytest, be careful that there is no `conftest.py` in any
of the directories starting from / !!! -- it will mess up your tests >;/

> You could also run one source file of tests with
>
> test_begin_subtest "python bindings foo tests"
> test_expect_success "${NOTMUCH_PYTEST} ${NOTMUCH_SRCDIR}/bindings/python/notmuch/test_foo.py"

notmuch bätter... ;/

> that would give a less granular result, at the cost of more boilerplate

Tomi

_______________________________________________
notmuch mailing list
notmuch@notmuchmail.org
https://notmuchmail.org/mailman/listinfo/notmuch
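
[Editorial sketch] The "just use normal python" alternative discussed in
the thread -- plain assert statements run directly by the test script,
with no pytest assertion rewriting -- could look something like this.
The check_tags function and its contents are purely illustrative, not
part of the notmuch bindings; the only idea taken from the thread is
that a failing assert with an explicit message gives usable output
without pytest.

```python
# Hypothetical plain-python test file, runnable directly by the shell
# test harness (e.g. via test_expect_success "python test_foo.py").
# No pytest needed; a nonzero exit from a failed assert marks failure.

def check_tags():
    # Illustrative check only -- not a real notmuch API call.
    tags = sorted({"unread", "inbox"})
    # Include the actual value in the message, since without pytest's
    # assertion rewriting a bare assert prints nothing useful.
    assert tags == ["inbox", "unread"], "unexpected tags: %r" % tags
    return tags

if __name__ == "__main__":
    check_tags()
    print("OK")
```

The trade-off matches what the thread says: pytest mainly buys nicer
visualization of failed asserts; plain python keeps the harness simpler
and avoids surprises like a stray conftest.py being picked up.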