Jun 27, 2017
Beast + Electron

We have just released Beast version 0.11.0: Beast 0.11.0 Announcement

The announcement gives a high-level overview of the changes (SoundFont support, multi-threaded signal processing, new packaging, etc.) and links to all the details like NEWS, tarballs, the binary package and shortlogs.

In this post, I’d like to expand a bit on where we’re going next. Beast has come a long way from its first lines of code drafted in 1996 and has seen long periods of inactivity due to numerous personal reasons on my part and also Stefan’s. I can’t begin to describe how much the Beast project owes to Stefan’s involvement; development really thrives whenever he manages to put some weight behind the project. He’s initiated major shifts in the project and contributed lots of demos, instruments and of course code.

Lately we have been able to devote some time to Beast again, and with that reformulated its future direction. One important change was packaging, which already made it into the 0.11.0 release. This allows us to provide an easily installable binary package that extracts into /opt/. It’s available as a DEB for now, and we hope other package formats will follow.

Another major area of change that I’m working on behind the scenes is the UI technology. The current UI has major deficits and lacks the workflow optimizations found in other DAWs. Stefan has several big improvements planned for the workflow, as do I, but in the past Gtk+ has not made those changes easy. Rapicorn was one attempt at fixing that, and while in theory it can provide a lot more flexibility in shaping the UI, based on concise declarations and use of SVG elements, it is still far away from the degree of flexibility our plans need.

So far away, in fact, that I’ve had to seriously reconsider the approach and look for alternatives. Incidentally, the vast majority of the features and ideas I’ve needed in the toolkit area appear to already be readily available through web technologies, which have advanced impressively in the last few years.

Though we’re not planning to turn Beast into an online application, we can still leverage these technologies through Electron, an open source framework that provides HTML & CSS rendering plus JavaScript on the desktop using libchromiumcontent from Google Chrome.

In my eyes it makes little sense to replicate many of the W3C-specified features in desktop toolkits like Gtk+, Qt or Rapicorn, which are far less staffed than the major browser projects, especially if we have a way to utilize recent browser improvements on the desktop.

So in effect I’ve changed plans for Beast’s future UI technology and started to construct a new interface based on web technologies running in Electron. It’s an interesting change for desktop UI development, to say the least, and I’m curious how long it will take to get up to par with current Gtk+ Beast functionality. I have some ideas for real-time display of volume and frequency meters, but I’m still unsure how to best tackle large track view / clip view displays with wide scrolling and zooming ranges, given the choice between DOM elements and an HTML5 canvas.

Apart from the UI, we have several sound library improvements pending integration. Stefan wants to finally complete JACK driver support, and as always there are some interesting plugin implementations in the queue awaiting completion.

If you want to help with any of the development steps outlined or just track Beast’s evolution, you can join our mailing list. Although the occasional face-to-face meeting helps us set development directions, we do our best to keep everything documented and open for discussion on the list.

UPDATE: Stefan just released his first Beast screencast: Walkthrough: making music with BEAST 0.11.0

Dec 27, 2016


The last update has been a while, so with the new year around the corner and sitting in c-base @ 33c3, I’ll do my best to sum up what’s been going on in Rapicorn and Beast development since the last releases.

Now both projects make use of extended instruction sets (SIMD) that have been present in CPUs for the last 8–10 years, such as MMX, SSE, SSE2, SSE3 and CMPXCHG16B. Both projects also now support easy test builds in Docker images, which makes automated testing for different Linux distributions from travis-ci much simpler and more reproducible. Along the way, both were finally fixed up to fully support clang++ builds, although clang++ still throws a number of warnings. This means we can use clang++-based development and debugging tools now! A lot of old code that had become obsolete or always remained experimental could be removed (and still is being removed).

Beast got support for using multiple CPU cores in its synthesis engine; we are currently testing the performance improvements and stability of this addition. Rapicorn gained some extra logic to allow main loop integration with a GMainContext, which allows Beast to execute a Gtk+ and a Rapicorn event loop in the same thread.

Rapicorn widgets now always store coordinates relative to their parents, and always buffer drawings in per-widget surfaces. This allowed major optimizations to the size negotiation process, so renegotiations can now operate in a much more fine-grained fashion. The widget states also got an overhaul, and XML nodes now use declare="…" attributes when new widgets are composed. Due to some rendering changes, the librsvg modifications could be obsoleted, so librapicorn now links against a preinstalled librsvg. RadioButton, ToggleButton, SelectableItem and new painter widgets were added, as well as a few convenience properties.

After setting up an experimental Rapicorn build with Meson, we got some new ideas to speed up and improve the autotools-based builds. That is, I did a full Rapicorn build with Meson and compared it to autotools + GNU Make. It turns out Meson had two significant speed advantages:

  1. Meson builds files from multiple directories in parallel;
  2. Meson configuration happens a lot faster than what the autoconf scripts do.

Meson also has/had a lot of quirks (examples #785, #786, #753) and wasn’t really easier to use than our GNU Make setup, at least for me, given that I know GNU Make very well. The number one advantage of Meson was overcome by migrating Rapicorn to a non-recursive Makefile (I find dependencies can still be expressed much better in Make than in Meson), since parallel GNU Make can be just as fast as Ninja for small to medium sized projects.

The number two issue is harder to beat though. Looking at our configure.ac file, there were a lot of shell and compiler invocations I could remove, simply by taking the same shortcuts that Meson does, e.g. detect clang or gcc and then devise a batch of compiler flags instead of testing compiler support for each flag individually. Executing ./configure takes ca. 3 seconds now, which isn’t too bad for infrequent invocations. The real culprit is autoreconf though, which takes over 12 seconds to regenerate everything after a configure.ac or related change (briefly looking into that, it seems aclocal takes longer than all of autoconf, automake, autoheader and libtoolize together).


PS: I’m attending 33C3 in Hamburg atm, so drop me a line (email or twitter) if you’re around and would like to chat over coffee.

Jul 02, 2015

Rapicorn 'visitor' branch

Trying to keep it up, here’s an update on recent developments in Rapicorn and Beast.

Git Branches

For now, Rapicorn and Beast are using Git branches the following way:

  • Topic branches are created for each change. Where possible, commits should compile and pass all tests (i.e. pass make check installcheck).
  • Once completed, topic branches are merged into the master branch. For intermediate merges of huge branches, I’ve recently been adding [ongoing] to the merge commit message. As an aside, branch merges should probably be more elaborate in the future to make devlog articles easier to write and potentially more accurate.
  • The master branch must always compile and pass all tests.
  • OpenHub: The OpenHub repo links have been adjusted to point at Rapicorn’s and Beast’s master branches. Because of problems with spammers and a corresponding reimplementation, code statistics updates on the OpenHub platform are currently stalled, however.

Hello and goodbye clang++

Rapicorn C++11 code currently compiles with g++-4.7 and upwards. An initial attempt was made at making the C++11 code compile with clang++-3.4 but the incompatibilities are currently too numerous. A few good fixes have come out of this and are merged into master now, but further work on this branch probably has to wait for a newer clang++ version.

New Widgets

Rapicorn is growing more widgets that implement state rendering via SVG element matching. Recent additions are:

  • LayerPainter – A container that allows rendering widgets on top of each other.
  • ElementPainter – A container that displays state dependent SVG image elements.
  • FocusPainter – An ElementPainter that decorates its child according to focus changes.

IDL Improvements

Several changes around Rapicorn’s IDL compiler and support code made it into master recently:

  • The IDL layer got bind() and connect() methods (on the ObjectBroker interface). This models the IDL setup phase after the ZeroMQ API. Beast makes use of this when setting up IDL interface layers in the UI and in BSE.
  • The Python binding was rewritten using Cython. Instead of invoking a heap of generated Python glue code and talking to the message passing interfaces directly, the Python binding now sits on top of the C++ binding. This makes the end result operate much faster, is less complex to maintain and is more functional with regard to the Python API offered. As an added bonus, it also eases testing of the C++ bindings.
  • And just to prove the previous point, the new Cython port uncovered a major issue lurking in the C++ IDL handling of objects in records and sequences. At least since the introduction of remote reference counting, client side object handles and server side object references are implemented and treated in fundamentally different ways. This requires records (struct) and sequences (std::vector) to have separate implementation types on the client and server sides. Thus, the client and server types are now prefixed with ClnT_ and SrvT_ respectively. Newly generated typedef aliases are hiding the prefixes from user code.
  • IDL files don’t need ‘= 0’ postfixes for methods any more. After all, generating non-virtual methods wasn’t really used anyway.
  • The Enum introspection facilities got rewritten so things like the Enum name are also accessible now. This area probably isn’t fully finished yet; for future Any integration a more versatile API is still needed.
  • Auxiliary information for properties is now accessible through an __aida_aux_data__() method on generated interfaces.
  • Generated records now provide a template method __accept__<>(Visitor) to visit all record fields by value reference and name string. Exemplary visitor implementations are provided to serialize/deserialize records to XML and INI file formats.

BEAST Developments

For the most part, changes in Beast are driving or chasing Rapicorn at the moment. This means that often the tip of Rapicorn master is required to build Beast’s master branch. Here is why:

  • Beast now uses RAPIDRES(1) to embed compressed files. Rapicorn::Blob and Rapicorn::Res make these accessible.
  • Beast now makes use of Rapicorn’s IDL compiler to generate beastrc config structures and to add a new ‘Bse‘ IDL layer into libbse that allows the UI code to interface with Bse objects via C++ interfaces. Of course, lots of additional porting work is needed to complete this.
  • Beast procedures (a kind of ‘remote method’ implemented in C with lots of boilerplate code) are now being migrated to C++ methods one by one, which greatly simplifies the code base but also causes lots of laborious adaptations at the call sites, in the UI and in the undo system. An excursion into the changes this brings for the undo implementation is provided in DevLog: A day of templates.
  • The GParamSpec introspection objects for properties that Beast uses for GUI generation can now be constructed from __aida_aux_data__()  strings, which enabled the beastrc config structure migration.
  • An explanatory file HACKING.md was added which describes the ongoing migration efforts and provides help in accessing the object types involved.

What’s next?

For the moment, porting the object system in Beast from GObject to IDL based C++11 interfaces and related procedure, signal and property migrations is keeping me more than busy. I’ll try to focus on completing the majority of work in this area first. But for outlooks, adding a Python REPL might make a good followup step. 😉

May 05, 2015

Giving in to persistent nagging from Stephen and Stefan about progress updates (thanks guys), I’ll cherry pick some of the branches recently merged into Rapicorn devel for this post. We’ll see if I can keep posting updates more regularly in the future… 😉

Interactive Examples

Following an idea Pippin showed me for his FOSDEM talk, I’ve implemented a small script (merged with the ‘interactive-examples’ branch) that restarts an example program whenever any file in a directory hierarchy changes. This allows “live” demonstration of widget tree modifications in source code, e.g.:

cd rapicorn/
misc/interactive.sh python ./docs/tutorial/tuthello.py &
emacs ./docs/tutorial/tuthello.py
# modify and save tuthello.py

Every time a modification is saved, tuthello.py is restarted, so the test window it displays “appears” to update itself.

Shared_ptr widgets

Last weekend, I also pushed the make_shared_widgets branch to Rapicorn.

A while ago, we started using std::shared_ptr<> to maintain widget reference counts instead of the hand-crafted ref/unref functions that used atomic operations. After several cleanups, we can now also use std::make_shared() to allocate a single memory block storing both the reference count and the widget data. Here is an image (originals by Herb Sutter) demonstrating it:


The hand-optimized atomic operations we used previously had some speed advantages, but using shared_ptr was needed to properly implement remote reference counting.


Since 2003 or so, Beast and later Rapicorn have had the ability to turn any resource file, e.g. PNG icons, into a stream of C char data to be compiled into a program data section for runtime access. The process was rather unordered and ad hoc though: any source file could include char data generated that way, but each case needed its own make rules and support code to access, uncompress and use that data. Lately I did a survey of how other projects integrate resource files and simplified matters in Rapicorn based on the inspiration I got.
With the merge of the ‘Res’ branch, resource files like icons and XML files have now all been moved under the res/ directory. All files under this subdir are automatically compressed and compiled into the Rapicorn shared library and are accessible through the ‘Res’ resource class. Example:

Blob data = Res ("@res icons/example.png");

Blob objects can be constructed from resources or memory-mapped files; they provide size() and data() methods and are automatically memory managed.

New eval syntax

In the recently merged ‘factory-eval-syntax’ branch, we’ve changed the expression evaluation syntax for UI XML files to the following:

<label markup-text="@eval label_variable"></label>

Starting attribute values with ‘@’ has precedent on other platforms and is also useful in other contexts like resources, which allows us to reduce the number of syntax special cases for XML notations.

Additionally, the XML files now support property element syntax, e.g. to set the ‘markup_text’ property of a Label:

    <Label.markup-text> Multiline <b>Text</b>... </Label.markup-text>

This markup is much more natural for complex property values and also has precedent on other platforms.

What’s next

I’m currently knee deep in the guts of new theming code, the majority of which has just started to work but some important bits still need finishing. This also brings some interesting renovation of widget states, which I hope to cover here soon. As always, the Rapicorn Task List contains the most important things to be worked on next. Feedback on missing tasks or opinions on what to prioritize are always appreciated.

Dec 12, 2014

Miller, Gary – Wikimedia Commons

In the last months I finally completed and merged a long-standing debt into Rapicorn. Ever since the Rapicorn GUI layout & rendering thread was separated from the main application (user) thread, referencing widgets (from the application via the C++ binding or the Python binding) has worked mostly due to luck.

I investigated and researched several remote reference counting and distributed garbage collection schemes; many kudos go to Stefan Westerfeld for letting me bounce ideas off him over time. In the end, the best solution for Rapicorn makes use of several unique features of its remote communication layer:

  1. Only the client (user) thread will ever make two-way calls into the server (GUI) thread, i.e. send a function call message and block for a result.
  2. All objects are known to live in the server thread only.
  3. Remote messages/calls are strictly sequenced between the threads, i.e. messages will be delivered and processed sequentially and in order of arrival.

This allows the following scheme:

  1. Any object reference that gets passed from the server (GUI) thread into the client (user) thread enters a server-side reference-table to keep the object alive. I.e. the server thread assumes that clients automatically “ref” new objects that pass the thread boundary.
  2. For any object reference received by a client thread, uses are counted separately on the client side, and once the first object becomes unused, a special message (SEEN_GARBAGE) is sent back to the server thread.
  3. At any point after receiving SEEN_GARBAGE, the server thread may opt to initiate the collection of remote object references. The current code has no artificial delays built in and does so immediately (thresholds for delays may be added in the future).
  4. To collect references, the server thread swaps its reference-table for an empty one and sends out a GARBAGE_SWEEP command.
  5. Upon receiving GARBAGE_SWEEP, the client thread creates a list of all object references it received in the past and for which the client-side use count has dropped to zero. These objects are removed from the client’s internal bookkeeping and the list is sent back to the server as GARBAGE_REPORT.
  6. Upon receiving a GARBAGE_REPORT corresponding to a previous GARBAGE_SWEEP command, the server thread has an exact list of references to purge from its previously detached reference-table. Remaining references are merged into the currently active table (the one that started empty upon GARBAGE_SWEEP initiation). That way, all object references that have been sent to the client thread but are now unused are discarded, unless they have meanwhile been added to the newly active reference-table.

So far, the scheme works really well. Swapping out the server side reference-tables copes properly with the most tricky case: A (new) object reference traveling from the server to the client (e.g. as part of a get_object() call result), while the client is about to report this very reference as unused in a GARBAGE_REPORT. Such an object reference will be received by the client after its garbage report creation and treated as a genuinely new object reference arriving, similarly to the result of a create_object() call. On the server side it is simply added into the new reference table, so it’ll survive the server receiving the garbage report and subsequent garbage disposal.

The only thing left was figuring out how to automatically test that an object is collected/unreferenced, i.e. write code that checks that objects are gone…
Since we moved to std::shared_ptr for widget reference counting and often use std::make_shared(), there isn’t really any way to generically hook into the last unref of an object to install test code. The best-effort test code I came up with can be found in testgc.py. It enables GC layer debug messages, triggers remote object creation + release and then checks the debugging output for corresponding collection messages. Example:

TestGC-Create 100 widgets... TestGC-Release 100 widgets...
GCStats: ClientConnectionImpl: SEEN_GARBAGE (aaaa000400000004)
GCStats: ServerConnectionImpl: GARBAGE_SWEEP: 103 candidates
GCStats: ClientConnectionImpl: GARBAGE_REPORT: 100 trash ids
GCStats: ServerConnectionImpl: GARBAGE_COLLECTED: \
  considered=103 retained=3 purged=100 active=3

The protocol is fairly efficient at saving bandwidth and task switches: ref-messages are implicit (never sent), unref-messages are sent only once and support batching (GARBAGE_REPORT). What’s left are the two tiny messages that initiate garbage collection (SEEN_GARBAGE) and synchronize reference counting (GARBAGE_SWEEP). As I hinted earlier, in the future we can introduce arbitrary delays between the two to reduce overhead and increase batching, if that ever becomes necessary.

While it’s not the most user-visible functionality implemented in Rapicorn, it represents an important milestone for reliable toolkit operation and a foundation for future developments like remote calls across process or machine boundaries.

Merge 14.10.0 – includes remote reference counting.

Merge shared_ptr-reference-counting – post-release.

Jun 16, 2008

The XML GUI definition files used in Rapicorn and also in Beast (described briefly in an earlier blog post) supported a simple $(function,arguments...) evaluation syntax, similar to GNU Make. I’ve never been very happy with this syntax, but it was fairly easy to implement at the start and followed naturally from early $VARIABLE expansion features. At some point last year I considered various alternative syntax variants and discussed the ideas with Stefan Westerfeld. Over the course of the last two months, I finally got around to implementing them.

I’ve never grown familiar with reverse Polish notation, and although Guile is the canonical scripting language for Beast, I’ve always found myself very inefficient at expressing my thoughts in Lisp expressions. So the new syntax definitely had to support infix expressions, despite the more complex parsing logic they require. Bison already ships with an example calculator that parses infix expressions, so that’s a quick start as far as the syntax rules go. Integration is quite a different story though; more on that later.

Since in Rapicorn the expressions are used to compute widget property values, they are likely to be executed very often, i.e. each time a widget is created from an XML file description. Consequently, I wanted to use a pre-parsed AST to carry out the evaluation and avoid mixing evaluation logic with parser logic, which would have forced reparsing expressions upon each evaluation. At first I quickly threw together some C++ classes for the arithmetic operators and used those as nodes for the AST, similar to:

  class ASTNodeNot : virtual public ASTNode {
    ASTNode &m_operand;
  public:
    explicit ASTNodeNot (ASTNode &operand) :
      m_operand (operand)
    {}
    virtual Value
    eval (Env *env) const
    {
      Value a = m_operand.eval (env);
      return Value (!a.asbool());
    }
  };
The supported syntax is quickly summarized:

  Operators: ( + - * / ** < > <= >= == != or and not )
  Functions: name ( args... )
  Inputs:    FloatingPoint 'SingleQuotedString' "DoubleQuotedString"

In this notation, FloatingPoint includes hexadecimal numbers and of course integers, and the quoted strings support C-style escape sequences like octal numbers, ‘\n’, ‘\t’ and so on. The functions can be implemented by the parser API user, but a good set of standard arithmetic functions is already supported, like ceil(), floor(), min(), max(), log(), etc. So it’s a very basic parser, but it covers the vast majority of expressions needed to position widgets or configure packing properties.

One thing that turned out to be tricky is the binary operator semantics for strings. At the very least, I wanted support for "string" + "string" and "string" == "string". Since both operators are supported for numbers as well, the exact behavior of "string" + FloatingPoint and "string" == FloatingPoint had to be defined. I managed to find a few programming language precedents in Perl, Python, and ECMAScript (JavaScript). They of course all handle the cases differently. In the end I settled on ECMAScript semantics:

  Value1 == Value2  # does string comparisons if both values are strings
  Value1 + Value2   # does string conversion if either value is a string

Unit testing for the parser turned out to be particularly easy to implement. All that’s needed is a small utility that reads expressions and prints/verifies the evaluation results. Throwing in some additional shell code allowed a normal text file to drive unit testing. It simply contains expressions and expected results on alternating lines. Btw, libreadline can be really handy in cases like this. Using it takes a mere 5-10 lines of additional code to support a nice interactive command line interface including history for the evaluator test shell.

After some initial testing, the C++ AST node classes seemed like an awful lot of pointers, fragmentation and VTable calls for a supposedly straightforward expression evaluation. Also, adding the missing destruction semantics would have significantly increased the existing class logic. So I tried to come up with a leaner and foremost flat memory representation. In the end, I settled on a single array that grows in 4-byte (integer) steps, embeds strings literally (padded to 4-byte alignment) and uses array offsets instead of pointers for references. A single multiplication operator is encoded with 3 integers that way: MUL_opcode factor1_index factor2_index. That’s essentially 12 bytes per binary operator, still significantly more than the parser input, but also significantly smaller than an equivalent C++ class allocation. Evaluating the expression stored in the array doesn’t need any VTable calls, and a single straightforward evaluation function can be used that implements the different operators as switch statement cases. Also, release semantics are trivially simple for a single consecutive array.

What’s left was figuring out a way to embed expression evaluation in XML values or text blocks. Previously, $(expression) was substituted everywhere, but with the new syntax, variables (or rather constants defined within the Rapicorn core or via the ‘‘ syntax supported by Rapicorn XML files) no longer use a $-prefix. So sticking with $() seemed to make little sense. As implemented now, backticks trigger expression evaluation, with the empty expression evaluating to a single backtick:

  XML Value/Text         Parser Result
    Foo  5 + 5      ->     Foo  5 + 5
    Foo `5 + 5`     ->     Foo 10
    ``Foo``         ->     `Foo`

We will see how useful the current expression style turns out to be. I don’t consider every implementation bit outlined here to be engraved in stone yet. So as always, I’m open to constructive feedback.

As forewarned, I have a few more words on integrating Flex and Bison with each other and into one’s own library. First, Flex and Bison turned out not to be exactly simple to configure, especially if none of the generated symbols should be exported from a library or clash with a possible second parser linked into the same library or program. Also, some fiddling is required to pass proper line count information from the lexer to the parser; getting character counts as well is non-trivial but luckily wasn’t strictly needed for Rapicorn expressions. The simplest setup I managed to come up with works as follows:

  sinfex.hh     # public API
  sinfeximpl.hh # internal structure definitions
  sinfex.cc     # evaluator implementation
  sinfex.l      # scanner rules for Flex
  sinfex.y      # parser rules for Bison
  sinfex.lgen   # generated by Flex
  sinfex.ygen   # generated by Bison

The only compiled file in this list is sinfex.cc, which includes sinfex.lgen and sinfex.ygen as part of an anonymous C++ namespace. A linker script ldscript.map used when finally linking the library prevents anonymous symbols from being exported. The anonymous namespacing of everything declared in sinfex.lgen and sinfex.ygen is what prevents clashes with a possible second parser linked into the library. This isn’t as elegant as I was hoping for, but it is at least effective in a practical sense. There unfortunately is no way to configure Flex or Bison to generate only static functions and variables. And yes, I have also looked into variants like flex++, bison++, bisonc++, byacc, etc., but they usually exhibit much the same problems and also tend to make matters worse by generating more files or more complex code.

Apr 05, 2007

The generation rules for the Rapicorn website are now finally in place:

-> rapicorn.org <-

This should help to clear up some of the motivations behind the Rapicorn project, in particular how it relates to Gtk+ and why it aims at implementing features that are to some extent already covered by Gtk+ or other projects like GnomeCanvas.
Here is a small excerpt to whet the appetite:

	These days Gtk+ is [...] a very successful toolkit.
	However, maintenance and continuous development of a project at this scope
	and scale come at significant costs in terms of development flexibility
	and compatibility. It is not anymore a suitable place for evolution of new
	experimental GUI technologies and quick paradigm shifts. So radically new
	toolkit approaches or design ideas now need to be explored elsewhere and
	only if successful and applicable can reflect back onto Gtk+ development.
	Rapicorn explores some approaches which are simply different from
	established Gtk+ paradigms.