Getting started with CppSharp

CppSharp is a neat project that generates C# bindings for C++ libraries. It’s under active development, and I finally got around to getting a sample up and running on OS X. Here’s a working example that follows the Getting Started guide on the GitHub page.

First, you’ll need to set yourself up with Mono. I installed mine through Homebrew, which gave me a 32-bit runtime. That means we need to make sure the C++ library we interface with is compiled for 32 bits.

Here’s a sample library. Make it and extract the archive into the working directory. You’ll see the following structure get unpacked:

libsample
libsample/include
libsample/lib

Next, you’ll need to clone and build CppSharp. The Getting Started page is the quickest way to build it. In short, you

  1. Clone the particular revisions of LLVM and Clang into the tools subdirectory.
  2. Configure and build LLVM with CMake, enabling C++11 and libc++ standard library support by adding cache entries LLVM_ENABLE_CXX11 and LLVM_ENABLE_LIBCXX.
  3. Configure and build CppSharp. They use premake as their build system, which is a lot simpler to deal with.

The result of building CppSharp is a set of .dlls. The easiest thing is to copy all the resulting .dll files into the working directory where the executable for your binding-generator code will go. Otherwise, you will need to add the directory containing these libraries to MONO_PATH.

To generate your bindings for libsample, you implement ILibrary. Here’s a barebones example that compiles and runs. It assumes you’ve got the following in your working directory:

lib/Release_x32/
libsample/{include,lib}

The binding generator, Sample.exe, parses the sample library and spits out bindings in the out/ directory.

Then you can proceed to use your C++ assets from C#; TestSample.exe, compiled in the barebones example above, will show you how. You just have to make sure the .dylib is in the working directory or visible through DYLD_LIBRARY_PATH.

Now that I’ve got this up and running, I’m looking into experimenting with QtSharp. It looks like the developer has committed bindings for Qt5, so the path of least friction is to target Qt5 rather than mess around with Qt4. I built 32-bit Qt5 overnight last night and will be testing it shortly.

Custom alternative search path for a.vim

If you’re a C++ developer, you’ll spend a lot of time toggling back and forth between your header (.h) files and the corresponding implementation (.cpp) files. In object-oriented programming, classes will typically map one-to-one with the files — Widget objects will typically be split into Widget.h holding declarations and Widget.cpp holding the actual implementations. On the other hand, when you write templated code, it needs to go in the header file. So although the code is split up between .h and .cpp, they really should be viewed as a single unit. Typically, I’ll be coding in vim with multiple tabs, each tab holding a .h/.cpp pair split like so:

[Screenshot: a .h/.cpp pair open side by side in a vim split]

Opening files can be a pain, especially when your code is tucked away in nested directory structures. You potentially waste a lot of time typing in the filename you want to open. It’s not so bad if you keep your .h and .cpp files together. In this case, you can use the a.vim (version 2.18 at the time of this writing) plugin. Open up the header file. Then, a simple :AV will open up the corresponding .cpp file in a new split pane window.

It turns out a.vim searches a handful of locations relative to the current file for the corresponding alternate file. For example, if I’m in Widget.h and toggle to the .cpp file, a.vim will look in the current directory as well as ../src (projects that build libraries often keep separate src and include directories).

If your project’s directory structure is a little more complicated (as in the screenshot above where you have to go between src/CGAL_Qt4 and include/CGAL/Qt), you can add a new relative path to search for the alternate file. Put the following in either your user .vimrc or a project-specific .vimrc:

let g:alternateSearchPath = 'sfr:../../../src/CGAL_Qt4,sfr:../../include/CGAL/Qt'

Note that the search paths start with sfr: and are comma-separated.

Actually, setting your own g:alternateSearchPath will blow away the defaults, but you can patch a.vim to append your paths to the default search paths, the way you usually do with the PATH variable:

--- a.vim	2013-12-23 13:10:42.458540027 -0800
+++ a.vim.new	2013-12-23 13:10:59.270540049 -0800
@@ -108,6 +108,8 @@
 " a path in their [._]vimrc. 
 if (!exists('g:alternateSearchPath'))
   let g:alternateSearchPath = 'sfr:../source,sfr:../src,sfr:../include,sfr:../inc'
+else
+  let g:alternateSearchPath = g:alternateSearchPath . ',sfr:../source,sfr:../src,sfr:../include,sfr:../inc'
 endif
 
 " If this variable is true then a.vim will not alternate to a file/buffer which

Note that in Vim script, the dot operator is used for string concatenation; also note the leading comma in the appended string, which keeps the combined list well-formed.

Wrap Fortran code to use in C++

There’s a lot of useful code in Fortran, and a lot of people actually prefer to work in Fortran, so as C++ developers it would be nice to be able to call their code from C++. Here’s a small CMake example showing how to do it, so you can follow along below: Download

I put the example together and tested it with the following versions of these tools:

  • g++ 4.7.3
  • gfortran 4.7.3
  • cmake 2.8.10.1

From a high level, the example does the following:

  1. Create a one-function Fortran library.
  2. Set up the interface to the Fortran library in C++ code.
  3. Link to the Fortran library and call the Fortran function.

Something to note is that every parameter to the Fortran function is a pointer type, even for non-array data types, because Fortran passes its arguments by reference.

To demystify the part of the example that sets up the interface to Fortran code:

extern "C" 
{
    void average_(int *n, double *a, double *ave);
}

The extern "C" part tells the C++ compiler not to expect a mangled name but rather take the function name as it is. C++ renames function names at compile-time in order to support function overloading automatically, but this behavior should be disabled when interfacing with code from other languages like Fortran.

Note that the average function has an underscore appended to it, whereas it does not have one in the Fortran file where it is defined. This is gfortran’s default behavior (its -fno-underscoring flag disables it), but I think the underscore is a nice reminder that it is a Fortran function.

Boost.Python hello world example using CMake

Somehow it seems I’m the only one among my labmates who is working mainly in C++/CMake and not Python. I want to do more than help them by discussing code; I want to share my code. So let’s use Boost.Python to wrap our C++ library so it can be called from Python code. This tutorial is based on the Hello world example in the Boost.Python docs and uses CMake.

To begin with, here are the versions of everything I was using when I wrote this post, so you can judge whether it’s still applicable:

  • Ubuntu 13.04
  • cmake version 2.8.10.1
  • gcc version 4.7.3
  • Boost 1.49 with Python component
  • Python 2.7.4

Here, we create a simple C++ library called libgreet, and we make a wrapper library called greet_ext. The wrapper library can be loaded as a Python module that exposes the function in the C++ library. The files are shown below, and the full example can be downloaded here. To use it, just download it and run these commands:

tar xf BoostPythonHelloWorld.tar.gz
cd BoostPythonHelloWorld
cmake .
make
./test.py

greet.h:

#ifndef GREET_H
#define GREET_H
char const* greet( );
#endif // GREET_H

greet.cpp:

#include "greet.h";

char const* greet( )
{
    return "Hello world";
}

greet_ext.cpp:

#include "greet.h";
#include <boost/python.hpp>;

BOOST_PYTHON_MODULE(greet_ext)
{
    using namespace boost::python;
    def( "greet", greet );
}
  • The parameter to BOOST_PYTHON_MODULE must match the name of the wrapper library we import in Python below. Note where greet_ext appears throughout the different files.

CMakeLists.txt:

cmake_minimum_required( VERSION 2.8 )

project( BoostPythonHelloWorld )

# Find necessary packages
find_package( PythonLibs 2.7 REQUIRED )
include_directories( ${PYTHON_INCLUDE_DIRS} )

find_package( Boost COMPONENTS python REQUIRED )
include_directories( ${Boost_INCLUDE_DIR} )

# Build our library
add_library( greet SHARED greet.cpp )

# Define the wrapper library that wraps our library
add_library( greet_ext SHARED greet_ext.cpp )
target_link_libraries( greet_ext ${Boost_LIBRARIES} greet )
# don't prepend wrapper library name with lib
set_target_properties( greet_ext PROPERTIES PREFIX "" )
  • The line with PythonLibs locates the python includes, which are necessary because the Boost.Python header pulls in a config file from there.
  • By default, CMake builds library targets with the lib prefix, but we actually want our wrapper library to be named greet_ext.so so we can import it with the name below.

test.py:

#!/usr/bin/python

import greet_ext

print greet_ext.greet()
  • This python script works for me if I put it in the same folder as the greet_ext.so wrapper library after it is built.

What I need to figure out
This is a nice, simple example to get started with, but I have to figure out more details.

  • How do I wrap functions that take/return C++ types? How about pointers and references to those types? How about global operators?
  • How does a wrapper library deal with C++ templates? Is it the case that you have to instantiate some of those template classes and then just work with those when you’re in Python?
  • Is Boost.Python the right choice? There’s a fairly recent webpage that discusses SWIG versus Boost.Python, and I think I’ve made the right choice for my project.

I’m looking forward to working on this further. This probably will be one of the things I work on this coming Christmas break.

P.S. God WordPress is so annoying when it escapes your code when you toggle between visual and text editing modes. Note to self, don’t switch modes.

Using CMake to create a bundle for a Qt4/VTK/CGAL project

I write a GUI application that uses Qt4 as a frontend and other libraries in the backend, like VTK, CGAL, and Boost. Because I use CMake as my build system, it’s quite manageable for developers on Windows, OS X, or Linux to build from source code. It gets a little more complicated when I need to build distributable executables for non-developers to use, and so far I do not have a good way to do this written into my main CMakeLists.txt. However, I was recently able to build a bundle, which is OS X’s distributable executable that you can pass around, put into your Applications folder, and double-click to run. This post has some details that might be useful, especially if you have experience building distributables on Linux.

To begin, note the versions of things that I’m using:

  • OS X 10.8.5
  • CMake 2.8.12
  • CGAL 4.3
  • Qt 4.8.5
  • VTK 5.10.1

The general approach is the following:

  1. Build CMake
  2. Build the library dependencies using our CMake
  3. Build our project
  4. Install our project in a portable way – in Linux, the executable and its library dependencies are copied into an install folder. The executable is hardwired with a path (RPATH) that tells it where to look for library dependencies. The executable is portable in the sense that the RPATH can be set to a path relative to the executable (with the $ORIGIN keyword), so you can relocate the executable and still have it find the libraries it needs.
    In OS X, the distributable executable is a bundle, which is basically a folder containing the executable and the libraries in a particular structure. CMake has a module to set this up automatically on install. This module also performs bundle fixup where library dependencies are pointed to the ones in the bundle folder rather than the ones on your system.

Why are we doing it this way?
This approach might seem like overkill: Homebrew is a package manager that can install CMake and all the libraries you would probably need. However, there is an issue with the last step of the general approach: when Homebrew’s CMake copies the dependency libraries into the bundle, it messes up. The only way I’ve found to get this working is to skip Homebrew’s CMake and use our own CMake build.

Okay, let’s step through.

1. Build CMake
I’m just going to assume you got the source package, built and installed it, and added the bin folder to your path. One further thing that I did was edit a file that was giving me errors during bundle fixup. In share/cmake-2.8/Modules/BundleUtilities.cmake:

--- BundleUtilities.orig.cmake 2013-11-28 01:11:34.000000000 -0800
+++ BundleUtilities.cmake 2013-11-02 20:52:04.000000000 -0700
@@ -585,9 +585,9 @@
 endif()
 endforeach()

- if(BU_CHMOD_BUNDLE_ITEMS)
+ #if(BU_CHMOD_BUNDLE_ITEMS)
 execute_process(COMMAND chmod u+w "${resolved_embedded_item}")
- endif()
+ #endif()

 # Change this item's id and all of its references in one call
 # to install_name_tool:

2. Build the library dependencies using our CMake
I won’t write anything about this except post the line that I used to configure Qt4. Basically, I lifted the configuration from the brew script for qt4, which seemed to have some OS X specific things in it:

./configure -system-zlib -confirm-license -opensource -nomake demos -nomake examples -cocoa -fast -release -no-3dnow -no-ssse3 -platform unsupported/macx-clang -nomake docs -qt3support -prefix ${PREFIX}

3. Build our project
I also won’t write anything about this part since it is just a matter of using find_package to locate and pull in the required libraries, and I assume you’ve got your project configured.

4. Install our project in a portable way
So here we throw in a few extra lines into CMakeLists.txt. A full example is available from the CMake wiki here. Say your build target is named MyProgram, then you would include some install instructions that look like this:

install( TARGETS
  MyProgram
  BUNDLE DESTINATION .
  RUNTIME DESTINATION bin
  LIBRARY DESTINATION lib
  ARCHIVE DESTINATION lib/static
)

Now I already had an install directive, but the BUNDLE DESTINATION line was the new one to add. Basically, if you’re building on any OS other than OS X, your executable will end up in ${CMAKE_INSTALL_PREFIX}/bin; otherwise, it will go into a bundle folder named after the build target, which in this case will be ${CMAKE_INSTALL_PREFIX}/./MyProgram.app. This explains the following lines that come afterwards:

set( APPS "\${CMAKE_INSTALL_PREFIX}/MyProgram.app" )
set( DIRS "\${CMAKE_INSTALL_PREFIX}/lib" )
set( qtconf_dest_dir MyProgram.app/Contents/Resources )
install( CODE "
  file(WRITE \"\${CMAKE_INSTALL_PREFIX}/${qtconf_dest_dir}/qt.conf\" \"\")
" COMPONENT Runtime )
install( CODE "
  include(BundleUtilities)
  fixup_bundle( \"${APPS}\" \"\" \"${DIRS}\" )
" COMPONENT Runtime )

The first three lines just define some helper variables that get used afterwards: ${APPS} points to the bundle folder, and ${DIRS} points to where to look for the libraries it depends on. I set this to point to where I install my project’s library build targets.

The first install(CODE ...) block writes an empty qt.conf into the bundle folder, which is necessary to get Qt applications working properly.

The second install(CODE ...) block performs the bundle fixup: when the bundle is initially installed, the executable is linked to required libraries via paths specific to your system. These include libraries external to your project and library build targets within your project. fixup_bundle will identify the libraries used by the bundle pointed to by ${APPS}, copy them into the bundle folder, then relink the executable to these copies instead of the ones installed on your system.

It’s a little weird how BundleUtilities actually discovers the library dependencies. On the one hand, it is supposed to look in ${DIRS} to find libraries, but it might not find them all; for my project, I had to manually copy the ones it missed into the bundle folder and rerun the CMake install to get things fixed up. On the other hand, although I didn’t specify VTK and the others in ${DIRS}, it was able to find and copy those in. In short, pay attention to any errors during bundle fixup, and if libraries that should have been copied are missing, you can usually work around it by manually copying them into the bundle right next to the executable and rerunning the install.

Additional notes

  1. In Linux, you could inspect dynamic library dependencies with ldd. In OS X, you can make use of otool -L to do the same thing.
  2. In OS X, you can do some manual fixup of library dependencies as BundleUtilities does by using install_name_tool.

Hopefully, reporting all of this in one place was helpful to someone. CMake really works wonders; the only catch is that hunting down enough information to do what you want can take more time than expected. Also, this is my first serious take on building projects on OS X, so there was a bit of a learning curve coming from Linux, but it’s not so bad now.

Feel free to leave questions in the comments.

Happy thanksgiving!

Blender notes from a Meshlab user

When I first started working with meshes, the first tool people told me to get started with was Meshlab. It’s an open-source mesh processing tool that lets you visualize your meshes, do some mesh editing, and pick from a kitchen sink full of filters. See my previous post for an example of computing triangle surface area on a mesh by putting together a few of these filters. However, if you want to get more hands-on with your mesh and touch up certain areas in a fine-grained way, Meshlab disappoints.

This is where Blender comes in. It’s another nice open-source tool that everyone will recommend, but it’s loaded with so much stuff that it is probably less intuitive to use. I’m not a professional at 3D modeling, but I do need to curate the data I work with in my research, so here are a few notes about Blender that have been helpful to me in my mesh editing work.

You can focus on selected primitives by right-clicking in Edit mode and hitting PERIOD.

You can select boundaries by going into Edit mode, then going to Select -> Edges -> Non-manifold. Blender also has a nice Python scripting environment.

You can fill holes by selecting a hole and pressing ALT + F.

You can hit TAB to go between Object and Edit mode. Tab out of Edit mode before exporting your mesh so that any changes are committed to the model first.

You can open the properties panel by hitting N. What it contains is context-dependent. You’ll watch some video or read some tutorial that refers to this panel, and it might be mysterious at first how it is even reached, so now you know.

For context-dependent actions on edges, press CTRL + E.

For context-dependent actions on vertices, press CTRL + V.

Importing/exporting with OBJ file format works best if you want to keep mesh data consistent across Meshlab and Blender.

For reference, the latest version of Blender at the time of this post is 2.69. There are a lot of good tutorial videos for users, such as this one. I just wish there were a web page to go with some of these so I don’t have to scrub through a lengthy video to review content, hence this mini cheat sheet of things that were helpful to me.

Compute area for each triangle facet in meshlab

Meshlab is probably the first program that comes to mind when you think of working with meshes. Depending on what you actually want to do, you might be completely disappointed or completely blown away by what Meshlab has to offer. This post is an example of the latter, where I needed to do some processing on triangles that were too big.

Meshlab lets you compute and associate numbers to facets: this is called facet quality. If you go to Filters > Quality Measure and Computations > Per Face Quality Function, you can key in a function in terms of the vertex positions, and the function will be evaluated and stored at each facet. Here’s a useful equation that will calculate the area of each triangle:


sqrt( ((sqrt((x1-x0)^2 + (y1-y0)^2 + (z1-z0)^2) + sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) + sqrt((x2-x0)^2 + (y2-y0)^2 + (z2-z0)^2)) / 2) * (((sqrt((x1-x0)^2 + (y1-y0)^2 + (z1-z0)^2) + sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) + sqrt((x2-x0)^2 + (y2-y0)^2 + (z2-z0)^2)) / 2) - sqrt((x1-x0)^2 + (y1-y0)^2 + (z1-z0)^2)) * (((sqrt((x1-x0)^2 + (y1-y0)^2 + (z1-z0)^2) + sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) + sqrt((x2-x0)^2 + (y2-y0)^2 + (z2-z0)^2)) / 2) - sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2)) * (((sqrt((x1-x0)^2 + (y1-y0)^2 + (z1-z0)^2) + sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) + sqrt((x2-x0)^2 + (y2-y0)^2 + (z2-z0)^2)) / 2) - sqrt((x2-x0)^2 + (y2-y0)^2 + (z2-z0)^2)))

Basically, I used this to be able to select and subdivide regions of the mesh that are too sparsely sampled. After computing the area, I select the triangles based on a threshold on the face quality (Filter > Selection > Select Faces by Face Quality), then run loop subdivision (Filters > Remeshing, Simplification, and Reconstruction > Subdivision Surfaces: Loop).

The triangles on this example mesh are colored based on their size. Hot colors are smaller.

The idea is that only the large triangles will be affected: the small faces will remain the same, which is important because I already have data attached to those vertices.