I'm playing around with Racket and missed having a byte-string comprehension. When I found for/fold/derived, with examples, in the documentation, I decided to roll my own byte-string comprehension macro, as any beginner would:
(define-syntax (for/bytes stx)
  (syntax-case stx ()
    ((_ clauses . defs+exprs)
     (with-syntax ((original stx))
       #'(let-values
             (((bstr i max-length)
               (for/fold/derived original
                                 ((bstr (make-bytes 16)) (c 0) (ln-incr 32))
                                 clauses
                 (define el (let () . defs+exprs))
                 ;; Grow the buffer (doubling the increment) when it is full.
                 (let-values (((new-bstr new-ln-incr)
                               (if (= c (bytes-length bstr))
                                   (values (bytes-append bstr (make-bytes ln-incr))
                                           (* ln-incr 2))
                                   (values bstr ln-incr))))
                   (bytes-set! new-bstr c el)
                   (values new-bstr (+ c 1) new-ln-incr)))))
         ;; Trim the buffer down to the bytes actually written.
         (subbytes bstr 0 i))))))
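For example, it can be used like this (a small sketch; the clause and body positions work as in for/list):

```racket
;; Increment every byte of #"abc".
(for/bytes ((b (in-bytes #"abc")))
  (+ b 1))
;; => #"bcd"
```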
I've got a few related questions:

1. Is this the Racket way anyhow?
2. Is the macro OK? Basically, I combined the examples from the for/fold/derived documentation with a macro-expanded for/vector.
3. Are there any obvious performance optimizations? Sadly, it's not really faster than (list->bytes (for/list ...)).
This micro-benchmark:
(require profile)

(define size 50000)
(define (custom-byte-test) (for/bytes ((i (in-range size))) (modulo i 256)))
(define (standard-list-test) (list->bytes (for/list ((i (in-range size))) (modulo i 256))))
(profile-thunk custom-byte-test #:repeat 1000)
(profile-thunk standard-list-test #:repeat 1000)
gives 3212 ms vs. 3690 ms. For sizes much smaller than 50000 my for/bytes loses; for sizes bigger than that, it wins.
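(As an aside, profile-thunk's sampling adds its own overhead; Racket's built-in time form gives raw wall-clock numbers if you want a second opinion. A sketch reusing the definitions above:)

```racket
;; Rough timing without the profiler's sampling overhead.
(time (for ((_ (in-range 1000))) (custom-byte-test)))
(time (for ((_ (in-range 1000))) (standard-list-test)))
```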
My answers:
Is this the Racket way anyhow?
Yes.
Is the macro OK? Basically, I combined the examples from the for/fold/derived documentation with a macro-expanded for/vector.
Yes, I think it looks good.
Are there any obvious performance optimizations? Sadly, it's not really faster than (list->bytes (for/list ...)).
I'm not aware of how to do it faster. The "win" here is that the complexity of buffer resizing is hidden from users of for/bytes.
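One variant worth comparing (just a sketch using Racket's standard byte ports; I haven't benchmarked it here) is to let a growable bytes output port do the resizing instead:

```racket
;; open-output-bytes manages buffer growth internally;
;; get-output-bytes returns the accumulated bytes.
(define (port-byte-test)
  (define out (open-output-bytes))
  (for ((i (in-range size)))
    (write-byte (modulo i 256) out))
  (get-output-bytes out))
```

This trades the explicit doubling logic for the port's internal buffer management.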