Mar 25, 2007

I will quickly round up some of the interesting bits that happened around Beast in the last couple of weeks.
Stefan Westerfeld sat down and put up a collection of the various instruments and loops he is producing with Beast. Nicely described BSE files along with Ogg previews are available in his music collection: STW Music Archive.

Hanno Behrens had been fairly active in pushing the Beast project before the last release. In particular, he has been stirring up the mailing list with feature requests. One of the things we managed to get in, in response to these efforts, is support for a list of commonly used musical tuning systems. Besides the already supported 12-TET, the 12-note-per-octave equal temperament used in virtually all contemporary Western music, this includes further TET variants, Indian tuning, pentatonic tunings, meantone tunings and various well-tempered tunings mostly intended for organs.
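As a quick aside on the numbers behind this: 12-TET simply fixes the ratio between adjacent semitones at 2^(1/12). A minimal illustration in C (the A4 = 440 Hz reference and the MIDI note numbering are just the usual conventions here, not Beast specifics):

	/* Print 12-TET frequencies for one octave, using the common
	 * convention of MIDI note 69 = A4 = 440 Hz. */
	#include <math.h>
	#include <stdio.h>
	
	int
	main (void)
	{
	  for (int note = 60; note <= 72; note++)       /* C4 .. C5 */
	    printf ("note %3d: %8.2f Hz\n", note,
	            440.0 * pow (2.0, (note - 69) / 12.0));
	  return 0;
	}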

Hanno also published a very well written C64-retrospective article about Beast synthesis in the 20th issue (German) of the Lotek64 Magazine. Additionally, the technical background for this article is described in the Beast wiki: SID Vicious (English).

Dec 27, 2006

It really has been a long way to get this release out of the door. It fixes some serious crashers that I intend to blog about later, but more importantly it fixes a security vulnerability; here’s the related advisory: artswrapper vulnerability 2006-2916
Upgrading from any older Beast version is therefore strongly recommended. I’ll not go into too many boring details here, so let me just say that Beast now supports different musical tuning systems and also ships with new and extended modules (BseQuantizer). All the g(l)ory details can be found in the original announcement:

Oct 23, 2006

There’s been quite a bit of hacking going on in the Beast tree recently. Stefan Westerfeld kindly wrote up a new development summary which is published on the Beast front page.

In particular, we’ve been hacking on the unit tests and tried to get make check invocations to run much faster. To paraphrase Michael C. Feathers from his very interesting book Working Effectively with Legacy Code on unit tests:

Unit tests should run fast – a test taking 1/10th of a second is a slow unit test.

Most of the tests we executed during make check took much longer. Beast has some pretty sophisticated test features nowadays, e.g. it can render BSE files to WAV files offline (in a test harness), extract certain audio features from the WAV files and compare those against saved feature sets. In other places, we’re using tests that loop through all possible input/output values of a function in a brute-force manner and assert correctness over the full value range. On top of that, we have performance tests that may repeatedly call the same functions (often thousands or millions of times) in order to measure their performance and print out measurements.
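To give an idea of the brute-force style, here is a minimal sketch with a made-up helper function (the real tests target Beast internals, of course): an optimized implementation is compared against a trivially correct reference over the entire input range.

	/* Hypothetical brute-force range test: validate a fast byte
	 * saturation helper against an obviously correct reference
	 * implementation for every relevant input value. */
	#include <assert.h>
	#include <stdint.h>
	
	static inline uint8_t
	fast_clamp_u8 (int v)           /* optimized function under test */
	{
	  v &= ~(v >> 31);              /* clear negatives (assumes arithmetic shift) */
	  return v > 255 ? 255 : v;
	}
	
	static uint8_t
	ref_clamp_u8 (int v)            /* trivially correct reference */
	{
	  if (v < 0)   return 0;
	  if (v > 255) return 255;
	  return v;
	}
	
	int
	main (void)
	{
	  for (int v = -70000; v <= 70000; v++)  /* cover the full relevant range */
	    assert (fast_clamp_u8 (v) == ref_clamp_u8 (v));
	  return 0;
	}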

These kinds of tests are nice to have for broad correctness testing, especially around release time. However, we ran into the problem that make check became less likely to be executed before commits, because running the tests was too slow to bother with. That, of course, somewhat defeats the purpose of having a test harness. Another problem we ran into was the intermixing of correctness/accuracy tests with performance benchmarks. These often sit in the same test program, or even the same function, and are hard to spot that way in the full output of a check run.

To solve the outlined problems, we changed the Beast tests as follows:

* All makefiles support the (recursive) rules: check, slowcheck, perf, report (this is easily implemented by including a common makefile).

* Tests added to TESTS are run as part of check (automake standard).

* Tests added to SLOWTESTS are run as part of slowcheck with --test-slow.

* Tests added to PERFTESTS are run as part of perf with --test-perf.

* make report runs all of check, slowcheck and perf and captures the output into a file report.out.

* We use special test initialization functions (e.g. sfi_init_test(argc,argv)) which do the argument parsing to handle --test-slow and --test-perf (see the sketch below this list).

* Performance measurements are always reported by the treport_maximized(perf_testname,amount,unit) function or its treport_minimized() variant, depending on whether the measured quantity is meant to be maximized or minimized. These functions are defined in birnettests.h and print out quantities with a magic prefix that allows grepping for performance results.

* make distcheck enforces a successful run of make report.
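To illustrate how the last three points fit together, here is a minimal sketch of a test program under this scheme. Only sfi_init_test(), treport_maximized() and the option names are taken from the description above; the test bodies are invented, and the test_slow/test_perf flags are assumed to be exported by the test initialization:

	/* Hypothetical test program skeleton; birnettests.h is assumed
	 * to declare sfi_init_test(), the treport_*() functions and the
	 * test_slow/test_perf flags. */
	#include <birnettests.h>
	
	static void
	quick_checks (void)
	{ /* fast correctness assertions, always run by make check */ }
	
	static void
	brute_force_checks (void)
	{ /* lengthy full-range loops, only run by make slowcheck */ }
	
	int
	main (int argc, char *argv[])
	{
	  sfi_init_test (argc, argv);  /* parses --test-slow and --test-perf */
	  quick_checks ();
	  if (test_slow)               /* set by --test-slow */
	    brute_force_checks ();
	  if (test_perf)               /* set by --test-perf */
	    treport_maximized ("Example-Benchmark", 1234.5, "Streams");
	  return 0;
	}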

Together, these changes have allowed us to easily tweak our tests to use faster test loops (if !test_slow) and to conditionalize lengthy performance loops (if test_perf). So make check is pleasingly fast now, while make slowcheck still runs all the brute-force and lengthy tests we’ve come up with. Performance results are now available at your fingertips:

	$ make report
	$ grep '^#TBENCH=' report.out
	#TBENCH=mini:         Direct-AutoLocker:      +83.57            nSeconds
	#TBENCH=mini:         Birnet-AutoLocker:     +104.574           nSeconds
	#TBENCH=maxi:  CPU Resampling FPU-Up08M:     +260.4562325006    Streams
	#TBENCH=maxi:  CPU Resampling FPU-Up16M:     +184.19598452754   Streams
	#TBENCH=maxi:  CPU Resampling SSE-Up08M:     +399.04229848364   Streams
	#TBENCH=maxi:  CPU Resampling SSE-Up16M:     +338.5240352065    Streams 

The results are tailored to be parsable by performance statistics scripts. So writing scripts to present performance report differences and to compare performance reports between releases is now on the TODO list. 😉
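Such a script might start out along the following lines; this is just a sketch in C that matches the example output above, any scripting language would do as well:

	/* Sketch: extract #TBENCH= performance results from report.out.
	 * The line format is inferred solely from the example output above. */
	#include <stdio.h>
	
	int
	main (void)
	{
	  FILE *file = fopen ("report.out", "r");
	  char line[1024], kind[16], name[256], unit[64];
	  double amount;
	  if (!file)
	    return 1;
	  while (fgets (line, sizeof line, file))
	    if (sscanf (line, "#TBENCH=%15[^:]: %255[^:]: %lf %63s",
	                kind, name, &amount, unit) == 4)
	      printf ("%-28s %16g %s (%s)\n", name, amount, unit, kind);
	  fclose (file);
	  return 0;
	}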

Jul 16, 2006

After next Friday, Stefan Westerfeld and I are leaving for a 3-week vacation. So in order not to let the Beast release slip even more, we’ve been working hard over the last couple of weeks to get a new tarball out of the door. After much fiddling and a last-minute SVN-migration hiccup (Beast is in SVN now and will stay there), it’s finally accomplished:

Beast-0.7.0 NEWS

We really hope people are going to have fun with this release and report all bugs they encounter in Bugzilla.

May 16, 2006

Faced with the documentation system requirements outlined in the last episode (The Beast Documentation Quest (2)), it took a while to figure out that there really was no single documentation tool fulfilling all the needs outlined. So writing a new documentation tool for the Beast project looked like the only viable solution. I wasn’t ready to start out rolling a new full-fledged C++ parser though…
Fortunately, Doxygen includes an XML generation back-end that dumps the whole C/C++ structure after the parsing stage, including comments, into a set of XML files. Unfortunately, it unconditionally preprocesses the source code comments, so it required some trickery to circumvent complaints about unknown or invalidly used Doxygen macros when combined with a newly developed macro processor…
To get the macro processor going, I wrote a recursive descent parser for “@macro{arguments} more text\n“-style constructs in Python. Add to that a syntax facility for new macro definitions, an algorithm to detect and generate sections plus a rudimentary HTML generation back-end, and voilà – a new documentation generator was born: Doxer. That was around last summer.

In response to my last blog entry on this topic, one commenter wrote:

I think that starting another documentation project would be totally useless.

I don’t think I can agree here. For any software-solvable problem you have, you should only write your own tool if you can’t find an existing one that covers your needs or could be extended to do so. Admittedly, I didn’t know about Synopsis back then, and I intend to investigate more of its innards at some point to figure out whether it can be integrated nicely with Doxer. Until then, Beast again has a working documentation tool, re-enabling us to keep documenting the source code and updating the website.
At this point, the CVS version of Beast has been fully migrated to generate and display HTML runtime documentation, using a browser according to the user’s preferences. Also, all source code documentation was converted, and the website is fully generated by Doxer. All of this is driven by circa 6k lines of code, made possible only by the expressiveness of Python and by using Doxygen as a source code parsing back-end.

Feb 24, 2006

In order to find a new documentation tool for Beast, I set out a list of requirements that had to be met:

a) We need to generate reference documentation from C and C++ source code comments.
b) Since we also need to process ordinary documentation files (say an FAQ) that involve lots of text and occasional graphics, Texinfo syntax is to be preferred over XML-style markup (well – actually *anything* is preferable to XML-style markup for lots of human-digestible text) if at all possible.
c) We need to generate documentation from introspection tools (e.g. bseautodoc.c spews out documentation for object properties, signals and channels).
d) The source code comments should use the same markup style that is used for ordinary documentation and design documents (e.g. the Beast FAQ or the Beast Quick Start Guide).
e) We need to be able to generate decent-looking HTML for the Beast website.
f) We need to be able to generate manual pages – nothing too sophisticated, covering roughly the feature set of man(7) on Linux is more than sufficient.
g) We need a runtime documentation format with support for “live documentation”; this was one of the reasons to go with GtkTextView in the original documentation. Basically, “live” here means that runtime documentation displayed by a running Beast instance, e.g. triggered from the “Help” menu, should be able to provide links/buttons/activators that can make the running Beast instance play back documentation example songs.
h) Taking (b) to the extreme, writing documentation should be as easy as possible, maybe even as easy as writing documentation with OpenOffice Writer and exporting it as HTML. Of course, this would conflict with (g)… or – would it?
i) Automatic external cross-linking should be supported, e.g. to all the documentation sections offered at external sites. Especially within the reference documentation, automatic link markup would be good to have, without authors having to dig up links or add markup to the relevant keywords.
j) The tool-chain required to build documentation should be available from a reasonably standard system, such as the current Debian stable.
k) Something similar to the custom macro definition feature of Texinfo as described in The Beast Documentation Quest (1) should be offered by the new documentation system as well, if at all possible.

With the above requirements at hand, I looked around quite a bit to find a documentation tool that would fulfil all or most of them.
It turned out that (a) and (g) definitely were amongst the harder ones, while (i) was mostly solved already by the old documentation system as far as link indexing goes. There, we used a Perl script to extract various kinds of (source code identifier, URL) tuples from the existing online documentation. This script even covered enum values, something that gtk-doc does not generate its own index lists for.
One of the next things I found out was that many tools don’t deliver HTML output as nice as that generated by gtk-doc. But then, even the very long-term plans for gtk-doc do not include extending it towards C++ parsing…
In general, C++ parsing isn’t a task that is easily covered. As far as I could figure, there is no decent free C++ parser (including comments) usable for documentation purposes other than Doxygen. Luckily, Doxygen is a reasonably standard tool (j), supports ‘@’-style markup that looks Texinfo-like (b), and generates documentation in multiple output formats (amongst them HTML and manual pages). Unfortunately, I could not get the Doxygen HTML output to look anywhere near as decent as gtk-doc generated HTML pages, despite attempts at sophisticated CSS modifications and even hacking its documentation generation abilities. On closer inspection, Doxygen also did not fully cover what I had in mind for a general purpose documentation tool, i.e. (b), (c) and (k). It should be noted, however, that Doxygen supports link index imports in a Doxygen-specific format, which can be used to enable automatic external linking (i) with some effort.

Feb 22, 2006

Around 2002, we (the Beast project) started to put together the various bits and pieces for a makeinfo-based documentation system. We ended up using texinfo markup because it’s not as hard for humans to read and write as XML, and you can easily define your own macros to aid you in semantic markup.
E.g. if you want to mark up a menu item path as such, you simply define @menu{} to mark things up as desired and then use it like:
Use @menu{/File/Save As...} to save a file under an arbitrary name.
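For completeness, one possible Texinfo definition of such a macro (the actual formatting chosen for menu paths is a matter of taste):

	@macro menu{path}
	@emph{\path\}
	@end macro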
Documentation was needed mainly in three formats: HTML for the website, manual pages for the executables, and a specific (.markup) format for a GMarkup-based parser that served as a front end to GtkTextBuffer text and tags. This .markup format is actually what Beast used (and currently still uses) for its primary documentation, which can be displayed by GtkTextView widgets (.markup files contain tag definitions that define GtkTextTags with specific property settings, and tag spans that apply these to text regions). So to generate documentation, we basically did (simplified for brevity):
1) write doc.texi with texinfo markup,
2) generate XML markup from doc.texi with makeinfo --xml,
3) generate the target formats with xsltproc [markup.xsl|html.xsl|man.xsl].
And for display, we did:
4) parse .markup file at runtime with GMarkupParser,
5) create GtkTextTags and adjust their properties from the parsed tag definitions,
6) create a GtkTextBuffer to be displayed by a GtkTextView, and apply the GtkTextTag spans.
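For illustration, here is a much simplified sketch of steps 4) to 6) against the GTK+ 2 API; the markup snippet, tag name and tag properties are invented for the example:

	/* Minimal sketch of steps 4-6: parse markup with GMarkupParser and
	 * render the text into a GtkTextBuffer with a GtkTextTag applied.
	 * A real .markup file would also carry the tag definitions. */
	#include <gtk/gtk.h>
	
	static void
	text_cb (GMarkupParseContext *context, const gchar *text, gsize text_len,
	         gpointer user_data, GError **error)
	{
	  GtkTextBuffer *buffer = user_data;
	  GtkTextIter end;
	  gtk_text_buffer_get_end_iter (buffer, &end);
	  /* apply the (here hard-coded) "bold" tag span to the text region */
	  gtk_text_buffer_insert_with_tags_by_name (buffer, &end, text, text_len,
	                                            "bold", NULL);
	}
	
	int
	main (int argc, char *argv[])
	{
	  gtk_init (&argc, &argv);
	  GtkTextBuffer *buffer = gtk_text_buffer_new (NULL);
	  /* step 5: create a GtkTextTag with specific property settings */
	  gtk_text_buffer_create_tag (buffer, "bold",
	                              "weight", PANGO_WEIGHT_BOLD, NULL);
	  /* step 4: parse the markup at runtime */
	  GMarkupParser parser = { NULL, NULL, text_cb, NULL, NULL };
	  GMarkupParseContext *context =
	    g_markup_parse_context_new (&parser, 0, buffer, NULL);
	  g_markup_parse_context_parse (context, "<doc>Hello tags!</doc>", -1, NULL);
	  g_markup_parse_context_end_parse (context, NULL);
	  g_markup_parse_context_free (context);
	  /* step 6: display the buffer in a GtkTextView */
	  GtkWidget *view = gtk_text_view_new_with_buffer (buffer);
	  GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
	  gtk_container_add (GTK_CONTAINER (window), view);
	  gtk_widget_show_all (window);
	  gtk_main ();
	  return 0;
	}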
On top of all this, we parsed our source files with a Perl script to extract source code documentation, and generated .texi files from that. We used a Perl script instead of gtk-doc because gtk-doc was hard to use back then, and because we wanted to use texinfo markup instead of SGML markup for our documentation (the escaping required by SGML/XML really sucks for source code sequences).
So far so good.
But then, roughly 2 years ago, we started to get serious problems with our documentation system, basically due to instabilities of the --xml mode of makeinfo. Other annoyances were that we couldn’t document our newly added C++ code, and that GtkTextView/GtkTextBuffer, albeit having a rich set of markup facilities, have no support for <table/> markup in any form (and probably never will). That, and a bit of a maintenance lag in our .xsl files, led me to look around for suitable alternatives…