Jul 02 2015
 

Rapicorn 'visitor' branch

Trying to keep it up, here’s an update on recent developments in Rapicorn and Beast.

Git Branches

For now, Rapicorn and Beast are using Git branches the following way:

  • Topic branches are created for each change. Where possible, commits should compile and pass all tests (i.e. make check installcheck must succeed).
  • Once completed, topic branches are merged into the master branch. For intermediate merges of huge branches, I’ve recently been adding [ongoing] to the merge commit message. As an aside, merge commit messages should probably become more elaborate in the future to make devlog articles easier to write and potentially more accurate.
  • The master branch must always compile and pass all tests.
  • OpenHub: The OpenHub repo links have been adjusted to point at Rapicorn’s and Beast’s master branch. Because of problems with spammers and a corresponding reimplementation, code statistics updates on the OpenHub platform are currently stalled however.
    https://www.openhub.net/p/beast-bse
    https://www.openhub.net/p/rapicorn

Hello and goodbye clang++

Rapicorn C++11 code currently compiles with g++-4.7 and upwards. An initial attempt was made at making the C++11 code compile with clang++-3.4, but the incompatibilities are currently too numerous. A few good fixes have come out of this and are merged into master now, but further work on this branch probably has to wait for a newer clang++ version.

New Widgets

Rapicorn is growing more widgets that implement state rendering via SVG element matching. Recent additions are:

  • LayerPainter – A container that allows rendering widgets on top of each other.
  • ElementPainter – A container that displays state dependent SVG image elements (the idea is sketched below this list).
  • FocusPainter – An ElementPainter that decorates its child according to focus changes.
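
To illustrate the idea of state rendering via SVG element matching, here is a minimal, hedged C++ sketch; the WidgetState enum, the element_for_state() function and the ‘#id:state’ naming scheme are made up for illustration and do not reflect Rapicorn’s actual API:

	#include <iostream>
	#include <string>

	// Hypothetical widget states; the real Rapicorn state set differs.
	enum class WidgetState { NORMAL, HOVER, PRESSED, INSENSITIVE };

	// Map a widget state onto the id of an SVG element inside a theme
	// file; the matched element is then rendered as widget background.
	static std::string
	element_for_state (const std::string &base_id, WidgetState state)
	{
	  switch (state)
	    {
	    case WidgetState::HOVER:       return base_id + ":hover";
	    case WidgetState::PRESSED:     return base_id + ":pressed";
	    case WidgetState::INSENSITIVE: return base_id + ":insensitive";
	    default:                       return base_id; // plain element id
	    }
	}

	int
	main ()
	{
	  // A painter would look up this id in the theme SVG and fall back
	  // to plain "#button" if no state specific element exists.
	  std::cout << element_for_state ("#button", WidgetState::PRESSED) << "\n";
	  return 0;
	}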

IDL Improvements

Several changes around Rapicorn’s IDL compiler and support code made it into master recently:

  • The IDL layer got bind() and connect() methods (on the ObjectBroker interface). This models the IDL setup phase after the ZeroMQ API. Beast makes use of this when setting up IDL interface layers in the UI and in BSE.
  • The Python binding was rewritten using Cython. Instead of invoking a heap of generated Python glue code and talking to the message passing interfaces directly, the Python binding now sits on top of the C++ binding. This makes the end result operate much faster, reduces maintenance complexity, and yields a more functional Python API. As an added bonus, it also eases testing of the C++ bindings.
  • And just to prove the previous point, the new Cython port uncovered a major issue lurking in the C++ IDL handling of objects in records and sequences. At least since the introduction of remote reference counting, client side object handles and server side object references are implemented and treated in fundamentally different ways. This requires records (struct) and sequences (std::vector) to have separate implementation types on the client and server sides. Thus, the client and server types are now prefixed with ClnT_ and SrvT_ respectively. Newly generated typedef aliases are hiding the prefixes from user code.
  • IDL files don’t need ‘= 0’ postfixes for methods anymore. After all, generating non-virtual methods wasn’t really used anyway.
  • The Enum introspection facilities got rewritten, so things like the Enum name are also accessible now. This area probably isn’t fully finished yet; a more versatile API is still needed for future Any integration.
  • Auxiliary information for properties is now accessible through an __aida_aux_data__() method on generated interfaces.
  • Generated records now provide a template method __accept__<>(Visitor) to visit all record fields by value reference and name string. Exemplary visitor implementations are provided to serialize/deserialize records to XML and INI file formats.
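
To give an impression of the visitor mechanism, here is a minimal, self-contained sketch; the SongConfig record, its fields and the IniWriter visitor are hypothetical stand-ins for generated code, only the __accept__<>(Visitor) shape follows the description above:

	#include <iostream>
	#include <string>

	// Hypothetical record, standing in for an IDL-generated struct.
	struct SongConfig {
	  std::string title = "Untitled";
	  int         bpm   = 120;
	  bool        loop  = false;
	  // Visit every record field by value reference plus name string.
	  template<class Visitor> void
	  __accept__ (Visitor &visitor)
	  {
	    visitor (title, "title");
	    visitor (bpm,   "bpm");
	    visitor (loop,  "loop");
	  }
	};

	// A minimal visitor that dumps all fields in INI style.
	struct IniWriter {
	  template<class Value> void
	  operator() (Value &value, const char *name)
	  {
	    std::cout << name << "=" << value << "\n";
	  }
	};

	int
	main ()
	{
	  SongConfig config;
	  IniWriter writer;
	  config.__accept__ (writer); // prints: title=Untitled, bpm=120, loop=0
	  return 0;
	}

An XML serializing visitor works along the same lines, emitting element tags instead of key=value lines.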

BEAST Developments

For the most part, changes in Beast are driving or chasing Rapicorn at the moment. This means that often the tip of Rapicorn master is required to build Beast’s master branch. Here is why:

  • Beast now uses RAPIDRES(1) to embed compressed files. Rapicorn::Blob and Rapicorn::Res make these accessible.
  • Beast now makes use of Rapicorn’s IDL compiler to generate beastrc config structures and to add a new ‘Bse’ IDL layer into libbse that allows the UI code to interface with Bse objects via C++ interfaces. Of course, lots of additional porting work is needed to complete this.
  • Beast procedures (a kind of ‘remote method’ implemented in C with lots of boilerplate code) are being migrated to C++ methods one by one, which greatly simplifies the code base but also requires lots of laborious adaptations at the call sites, in the UI and in the undo system. An excursion into the changes this brings for the undo implementation is provided in DevLog: A day of templates.
  • The GParamSpec introspection objects for properties that Beast uses for GUI generation can now be constructed from __aida_aux_data__() strings, which enabled the beastrc config structure migration; a sketch of the idea follows below this list.
  • An explanatory file HACKING.md was added which describes the ongoing migration efforts and provides help in accessing the object types involved.
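
To make the aux data idea more concrete, here is a rough C++ sketch of constructing a GParamSpec from parsed key/value pairs; this is not Beast’s actual code, and the keys ‘min’, ‘max’, ‘default’ and ‘blurb’ as well as the parsing into a std::map are assumptions made purely for illustration:

	#include <glib-object.h>
	#include <map>
	#include <string>

	// Illustrative only: the real aux data key names and format used by
	// Beast are not shown in this post, so they are assumed here.
	static GParamSpec*
	param_spec_from_aux (const std::string &prop_name,
	                     const std::map<std::string, std::string> &aux)
	{
	  auto get = [&] (const char *key, double fallback) -> double {
	    auto it = aux.find (key);
	    return it != aux.end() ? g_ascii_strtod (it->second.c_str(), NULL) : fallback;
	  };
	  const char *blurb = aux.count ("blurb") ? aux.at ("blurb").c_str() : NULL;
	  return g_param_spec_double (prop_name.c_str(), NULL, blurb,
	                              get ("min", -G_MAXDOUBLE),
	                              get ("max", +G_MAXDOUBLE),
	                              get ("default", 0),
	                              G_PARAM_READWRITE);
	}

	int
	main ()
	{
	  std::map<std::string, std::string> aux = { { "min", "0" }, { "max", "1" }, { "default", "0.5" } };
	  GParamSpec *pspec = param_spec_from_aux ("volume", aux);
	  g_print ("%s: default=%g\n", pspec->name, G_PARAM_SPEC_DOUBLE (pspec)->default_value);
	  return 0;
	}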

What’s next?

For the moment, porting the object system in Beast from GObject to IDL based C++11 interfaces and the related procedure, signal and property migrations are keeping me more than busy. I’ll try to focus on completing the majority of work in this area first. As an outlook though, adding a Python REPL might make a good follow-up step. 😉

Aug 06 2013
 

Reblogged from the Lanedo GmbH blog:

Documentation Tools

Would you want to invest hours or days into automake logic without a use case?

For two of the last software releases I did, I was facing this question. Let me give a bit of background. Recently the documentation generation for Beast and Rapicorn fully switched over to Doxygen. This has brought a number of advantages, such as graphs for the C++ inheritance of classes, build tests of documentation example code and integration with the Python documentation. What still left room for improvement, however, was a simplification of the build process and the logic involved.

Generating polished documentation is time consuming

Maintaining the documentation builds has become increasingly complex. One of the things adding to the complexity is the growing number of dependencies; external tools are required for a number of features: Doxygen (and thus Qt) is required to build the main docs, dot is required for all sorts of graph generation, Python scripts are used to auto-extract some documentation bits, rsync is used for incremental updates of the documentation, Git is used to extract interesting meta information like the documentation build version number or the commit logs, etc.

More complexity for tarballs?

For the next release, I faced the task of looking into making the documentation generation rules work for tarballs, outside of the Git repository. That means building in an environment significantly different from the usual development setup and toolchain (of which Git has become an important part). At the very least, this was required:

  • Creating autoconf rules to check for Doxygen and all its dependencies.
  • Requiring users to have a working Qt installation in order to build Rapicorn.
  • Deriving a documentation build version id without access to Git.
  • Getting the build dependencies right so we auto-build when Git is around but don’t break when Git’s not around.

All of this just for the little gain of enabling the normal documentation re-generation for someone wanting to start development off a tarball release.

Development based on tarballs? Is this a modern use case?

Development happens in Git

During this year’s LinuxTag, I’ve taken the chance to enter discussions and get feedback on development habits in 2013. Development based on tarballs certainly was the norm when I started in Free Software & Open Source back in 1996. It’s totally not the case these days. A large number of projects have moved to Git or the likes. Rapicorn and Beast were moved to Git several years ago; we adopted the commit style of the Linux kernel, and a GNU-style ChangeLog plus commit hash ids is auto-generated from Git for the tarballs.

Utilizing the meta information of a project living in Git comes naturally as time passes and projects grow more familiar with Git. Examples are signed tags, scripts around branch/merge conventions, history grepping or symbolic version id generation. Git also significantly improves spin-off developments, which is why development of Git hosted projects generally happens in Git branches or Git clones these days. Sites like GitHub encourage forking and pulling; going back to the inconveniences of tarball based development, bare of any history, would be a giant leap backwards. In fact, these days tarballs serve as little more than a transport container for a specific snapshot of a Git repository.

Shipping pre-built Documentation

Taking a step back, it’d seem easier to avoid the hassle of adapting all the documentation build logic to work both ways, with and without Git, by simply including a copy of the readily built result in the tarball. Like everything, this has a downside as well of course: tarball size will increase significantly. Just how significantly? The actual size can make or break the deal, e.g. if it changed by orders of magnitude. Let’s take a look:

  • 6.0M – beast-0.8.0.tar.bz2
  • 23.0M – beast-full-docs.tar.bz2

Uhhh, that’s a lot. The full documentation for the next release alone totals almost four times the size of the last release tarball. That’s a bit excessive, can we do better?

It turns out that a large portion of the space in a full Doxygen HTML build is actually used up by images. Not the biggest chunk, but a large one nevertheless; for the above example, we’re looking at:

  • 23M – du -hc full-docs/*png
  • 73M – du -hc full-docs/

So, 23 out of 73 MB for the images, that’s 32%. Doxygen doesn’t make it too hard to build without images; it just needs the two configuration settings HAVE_DOT = NO and CLASS_DIAGRAMS = NO. Rebuilding the docs without any images also removes a number of image references, so we end up with:

  • 42M – slim-docs/

That’s a 42% reduction in documentation size. What remains is essentially plain text documentation, without any pre-compressed PNG images. That means bzip2 should be able to do a decent job on it; let’s give it a try:

  • 2.4M – beast-slim-docs.tar.bz2

Wow, that went better than expected; at this point we’re talking about just 40% of the source code tarball size. Definitely acceptable. Here are the numbers for the release candidates in direct comparison, with and without pre-built documentation:

  • 6.1M – beast-no-docs-0.8.1-rc1.tar.bz2
  • 8.6M – beast-full-docs-0.8.1-rc1.tar.bz2

Disabling Documentation generation rules in tarballs

Now that we’ve established that shipping documentation without graphs results in an acceptable tarball size increase, it’s easy to make the call to include the full documentation with tarball releases. As a nice side effect, auto-generation of the documentation in tarballs can be disabled (without the Git tree and other tools available, it’d be prone to fail anyway). The only thing to watch out for is the srcdir!=builddir case with automake: in Git trees, the documentation is built inside builddir, while in tarballs it’s shipped and available from within srcdir.

Pros and Cons for shipping documentation

  • Con: Tarball sizes increase, but the size difference seems acceptable; practical tests show less than a 50% increase in tarball size for documentation that excludes generated graphics.
  • Con: Tarball source changes cannot be reflected in the docs. This mostly affects packagers; as a remedy, it’d be nice if substantial patches were submitted upstream.
  • Pro: The build logic is significantly simplified, allowing a hard dependency on Git and skipping complex conditionals for tool availability.
  • Pro: Build time and complexity from tarballs are reduced. A nice side effect, considering the variety of documentation tools out there: sgml-tools, doxygen, gtk-doc, etc.

For me the pros in this case clearly outweigh the cons. I’m happy to hear about pros and cons I might have missed.

Prior Art?

Looking around the web for cases of other projects doing this didn’t turn up too many examples. There’s some probability that most projects don’t yet trade documentation generation rules for pre-generated documentation in tarballs.

If you know projects that turned to pre-generated documentation, please let me know about them.

I’m also very interested in end users’ and packagers’ opinions on this. Also, do people know about other materials that projects include pre-built in tarballs? And without the means to regenerate everything from just the tarballs?

Apr 07 2008
 

There have been several requests about hosting Gtk+ (and GLib) as a Git repository recently, and since that topic has come up more and more often, I’ve been meaning to write about this for quite some time.

Let’s first take a look at the actual motivation for such a move. There are a good number of advantages we would get out of using Git as a source code repository for Gtk+ and GLib:

  • We can work offline with the Gtk+ history and source code.
  • We can work much faster on Gtk+ even when online with tools such as git-log and git-blame.
  • It will be much easier for people to branch off and do development on their own local branches for a while, exchange their code easily with pulling branches between private repositories, etc. I.e. get all the advantages of a truly distributed versioning system.
  • With Git it’s much easier to carry along big Gtk+ changes including history by using cherry picking and (interactive) rebasing.
  • We can make proper public backups of the source code repositories. This ought to be possible already via svnsync, but isn’t for svn.gnome.org because we run an svn 1.3.x server instead of the svn 1.4.x server required by svnsync. (Yes, this issue has been raised with the sysadmins already.)

A quick poll on IRC has shown that all affected core maintainers are pretty much favoring Git over SVN as a source code repository for GLib/Gtk+.

However, here are the points to be considered for not moving the repositories to Git (just yet):

  • Complexity; Git is often perceived to be harder to use than SVN, though there are several attempts to mitigate this problem: Easy Git, Giggle, yyhelp.
    With some of the above, Git is as easy to use as SVN is, so Git complexity doesn’t need to be holding anyone off at this point.
  • Git may be stable and mature these days, but git-svn is not there yet. It is generally good enough to create local Git mirrors of SVN repositories and work from those to have most of the Git convenience on top of an SVN project.
    But git-svn has seen structural changes recently, quite a bit of rewriting and bug fixing, which indicates it’s still too much in flux for the one-and-only SVN->Git migration of projects at the scale of Gtk+. This is not meant as criticism of git-svn, quite the opposite actually. It’s good to see such an important component be alive and vivid. All issues I’ve raised with the maintainer so far have been addressed, but it seems advisable to wait for some stabilization before trusting all the Gtk+ history to it.
  • Gitweb interfaces already exist for GLib/Gtk+ mirrors, for example on testbit: Testbit Git Browser.
    These can be used for cloning which is much faster than a full git-svn import. Alternatively, shallow git-svn imports can be created like this:

      git-svn clone -T trunk -b branches -t tags -r 19001 svn://svn.gnome.org/svn/gtk+
    

    This will create a repository with a mere ~1000 revisions, including all changes since we branched for 2.12. We’re using such a shallow repository for faster cloning of our GSEAL() development at Imendio: view gtk+.git.

  • In summer 2006, we had the first test migration of all of GNOME CVS to SVN; in December 2006, we had the final migration. During that period, the Beast project stayed migrated to SVN to work out the quirks of the CVS->SVN migration before all the other GNOME modules were migrated and everyone’s converted modules would have needed fixing up. Quite a few issues needed fixing after the initial test migration, and in the end we had to rebuild the Beast development history from pieces. Sparing the other GNOME modules such hassle was the entire point of migrating Beast early on, so I’m not complaining.
    However, given the size and importance of GLib/Gtk+, the development history of those projects shouldn’t be put at a similar risk. That is, GLib/Gtk+ shouldn’t be pioneering our next source repository migration; let some other, smaller project do this and work out the quirks first.
  • git-svn already provides a good part of the Git advantages to developers while Gtk+ stays hosted in SVN. However, due to mismatching hashes, syncing branches between different people’s git-svn checkouts is tedious. Setting up an “official” git-svn mirror on git.gtk.org could probably help here. Also, to ease integration, jhbuild could be extended to use git-svn instead of svn commands to update SVN modules if the current module has a .git/ subdirectory instead of a .svn/ subdirectory.

The bottom line is, there’s a good number of advantages that Git already can (or could) provide for our development even without migrating the repositories to Git right away. Exactly when the right time for migrating GLib/Gtk+ and possibly other GNOME modules will be may not be an easy call, but right now is definitely too early.

Sep 19 2007
 

I’ve had to add another bit to yyhelp to make my everyday coding more convenient, so it got support for git-svn(1). The yypull and yypushpull commands will now detect a git(7) repository set up with git-svn, and unless the repository also has a regular git upstream, pulling and pushing will be carried out via git-svn rebase and git-svn dcommit. yyinfo shows the update method being used for a repository and now also displays the SVN HEAD revision, if any. That, plus miscellaneous changes, makes up version 0.6 of yyhelp:

	Overview of Changes in YummyYummySourceControl-0.6:

	* for git-svn repositories, use rebase and dcommit to pull and push
	* fixed argument parsing for yybranch and yybranchdel
	* support revision argument for yydiff
	* check revision argument to yywarp

Sep 08 2007
 

The YummyYummySourceControl script got a few buglet fixups over the last few days, so I’ve put up version 0.5b: yyhelp.
Here’s the obligatory list of changes since the last version:

	Overview of Changes in YummyYummySourceControl-0.5b:

	* update stat info for yycommit to ignore touched but unmodified files
	* use full paths for yyrestore to fix spurious file resurrections in parent dirs
	* fixed bogus error messages on yycommit

Jun 30 2007
 

Starting with the discontinuation of cogito, I began to wonder about my future git usage patterns. On the one hand, I have become quite used to the convenience of some of the cogito commands; on the other hand, I found myself using or looking up git commands every other day. Since the discontinuation, cogito cannot be expected to remedy those work-flow interruptions any time soon, so I actually started to cook up my own small set of convenience wrappers. I’m adding those on demand whenever I need a new source repository command, and I try hard to keep it simple, avoid giving the wrappers any options and keep an eye on them being very shell-completion friendly.

The GPLv3 licensed result is a single shell script plus a handful of links for invocation: yyhelp.

For the curious, the prefix ‘yy’ was chosen to allow conflict free shell completion. I’ll quote the yyhelp information here, so people can decide if it’s interesting for them and also get to see the installation instructions. I’m not entirely sure where to take this shallow wrapper in the future; however, keeping it simple is really the main incentive behind it. Any deeper logic required should rather be filed as a git-core request. If you want to talk to me about YummiYummiGit, drop me a line via email, or try #git on irc.freenode.net:6667.

	YummiYummiGit                                                     YummiYummiGit

	NAME
		YummiYummiGit - The simplest git wrapper ever, yet convenient

	SYNOPSIS
		yyadd          [FILES...] - add files to git repository
		yybranch    <name> [-f]   - add branch (forceable)
		yybranchdel <name> [-f]   - delete branch (forceable)
		yyChangeLog               - show git log in ChangeLog style
		yycommit       [FILES...] - commit current working tree
		yydiff         [FILES...] - show committable differences in working tree
		yygc                      - repack and prune repository
		yyhelp                    - display yy* help information
		yylsbranches              - list branches
		yylstags                  - list tags
		yypull                    - pull upstream sources
		yypushpull                - push & pull upstream sources
		yyremove       [FILES...] - remove files from git repository
		yyreset                   - reset (revert to HEAD) all files in the tree
		yyrestore      [FILES...] - forcefully recheckout specific files
		yystatus                  - display working tree status
		yytag       <name>        - add tag
		yytagdel    <name>        - delete tag
		yyuncommit                - undo the last commit (must be unpushed)
		yyview                    - browse & navigate the history
		yywarp      <branch>      - checkout new branch

	HISTORY
		YummiYummiGit was created as a very shallow wrapper around git-* tool
		option variants, to simplify common cases.
		Depending on programming habits, YummiYummiGit may or may not suit your
		daily needs. It is in any case not meant as a full replacement for the
		git interface (there is e.g. no yyclone) and subject to change.

	INSTALLATION
		To install YummiYummiGit, copy yyhelp into a bin/ directory from $PATH
		and invoke: ./yyhelp --create-aliases

	YummiYummiGit-0.4                                                 YummiYummiGit