Tomi Ollila <tomi.ollila@iki.fi> writes:

> On Sun, Apr 28 2019, David Bremner wrote:
>
>> Rob Browning isolated a bug where files of exactly 4096 bytes generate
>> errors because of a zero byte read.
>
> This happens to be an effective test in the case where FILE buffering
> uses 4096 byte buffers. If it used any other size, this would be a null
> test. So this test depends on an implementation detail of stdio (and
> IMO testing that is a waste of time).

It's nice to show the next patch actually fixes something.

> What could be useful is that we just happen to have a 4096 byte file in
> our test corpus, and some tests which test other functionality could
> accidentally fail due to this bug. We could also have 1024, 2048, 8192,
> and 16384 byte files (we still cannot say how large the buffers any
> stdio implementation uses are, but... :D)

One problem is that adding new messages to our standard corpus is a
messy operation, and I'm trying to keep the source diff small here.

> And if such messages were generated, instead of add_message they could
> be written "verbatim" from the script... so that the file size does not
> depend on how add_message creates them.

Sure, that might make sense, although the script starts to bloat a bit
for the 16k messages. OTOH, I'm fine with replacing the add_message with
a literal 4k message for now.

I guess we could actually test messages of size BUFSIZ. That isn't
guaranteed to be the actual buffer size, but it seems a better bet than
4096.

_______________________________________________
notmuch mailing list
notmuch@notmuchmail.org
https://notmuchmail.org/mailman/listinfo/notmuch