Author Archive

Recreate VMDK descriptor with Fusion

March 14, 2017

If you have a VMDK flat file but are missing the descriptor file (or you renamed the flat file), you will need to regenerate it.  This can be done on an ESX host, but if you have VMware Fusion, you can also do this with the vmware-vdiskmanager CLI tool included with the product.

Get the size of the VMDK that is missing a descriptor:

ls -l /path/to/some/disk-image.vmdk
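The byte size matters because the descriptor records the flat file's extent as a count of 512-byte sectors. A quick sketch of the arithmetic, using a throwaway stand-in file (the path and 1 MB size here are illustrative):

```shell
# Create a 1 MB stand-in for a flat file; substitute your real -flat.vmdk path.
dd if=/dev/zero of=/tmp/disk-image-flat.vmdk bs=1024 count=1024 2>/dev/null

bytes=$(wc -c < /tmp/disk-image-flat.vmdk)
sectors=$((bytes / 512))      # descriptors record extents in 512-byte sectors
echo "$bytes bytes = $sectors sectors"
```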

Make a new empty disk image near the .vmdk, which will create the descriptor file as well.  Be sure to use a different directory so as not to overwrite the existing .vmdk:

/Applications/VMware\ Fusion.app/Contents/Library/vmware-vdiskmanager -c -s 1GB -a lsilogic -t 4 /path/near/my/disk-image.vmdk

-c – creates a new disk

-s – size of the disk.  Use the size you found in the first step so the descriptor matches your flat file (1 GB in this example).

-a – type of adapter.  If connecting to a SCSI controller, you’ll need lsilogic.

-t – type of disk.  If it’s a thick provisioned ESXi / vSphere disk image, this should be 4.

This will create the .vmdk (descriptor) as well as a new -flat.vmdk (the actual disk image). You can copy the descriptor next to your original disk image and discard the new -flat.vmdk file since it’s not really needed.
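If you renamed the flat file (or the generated descriptor uses a different base name), the extent line inside the copied descriptor must be edited to reference your original -flat.vmdk. A sketch against a synthetic descriptor; real descriptors contain more fields, and the file names here are illustrative:

```shell
# Minimal stand-in descriptor; a real one also has CID, geometry, etc.
cat > /tmp/new.vmdk <<'EOF'
# Disk DescriptorFile
version=1
createType="vmfs"
RW 2097152 VMFS "new-flat.vmdk"
EOF

# Point the extent line at the original flat file's name.
sed -i.bak 's/"new-flat.vmdk"/"disk-image-flat.vmdk"/' /tmp/new.vmdk
grep RW /tmp/new.vmdk
```

If the new disk was created with a different size than the original, the sector count on the RW line needs the same treatment.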

Categories: vmware

Mono Applications in Docker Containers

February 20, 2016

Mono applications can be run in docker containers in several ways, including using base images from Xamarin or Microsoft that contain mono. If you have some specific runtime requirements, you may need to build your own images with mono preinstalled or provided on a data volume when launching the container.

Using mono from host

Because mono applications are often very self-contained, you can create a very slim image and make the mono runtime from the host available to the container when launching. This is convenient for keeping all applications using the same version of the mono runtime, allowing you to upgrade the runtime without modifying the existing images.


FROM centos
ENV PATH /mnt/mono/bin:$PATH

This short Dockerfile expects the mono runtime will be provided as a data volume from the host under the /mnt/mono mount point.

Build with docker build -t external_mono . to get a new docker image based on CentOS named “external_mono”. This image doesn’t contain mono, and instead you should pass mono in as a data volume from the host.

docker run -v /opt/mono:/mnt/mono --rm external_mono mono --version

The command above runs mono --version using the mono that is provided by the host. The host makes /opt/mono available as a data volume at /mnt/mono. The --rm option cleans up the container on exit.

As you’ll usually need an application there as well, you can similarly pass the application to the container as another data volume.

docker run -v /opt/mono:/mnt/mono -v /var/lib/myapp:/mnt/myapp --rm external_mono mono /mnt/myapp/MyExecutable.exe

With this approach, you never really build new images, which is very useful for simply sandboxing applications. The drawback is that all containers are running the same version of mono, shared from the host. If you run an orchestration system, such as Apache Mesos, the runtime must be installed on all hosts (Mesos slaves in this case).
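For convenience, the invocation above can be wrapped in a small launcher script. A sketch that only assembles and prints the command, so it can be reviewed first (and the sketch itself runs without docker); the default paths mirror the examples above:

```shell
# Assemble the shared-host-mono invocation; defaults mirror the post's examples.
MONO_DIR="${MONO_DIR:-/opt/mono}"
APP_DIR="${APP_DIR:-/var/lib/myapp}"
EXE="${EXE:-MyExecutable.exe}"

cmd="docker run -v $MONO_DIR:/mnt/mono -v $APP_DIR:/mnt/myapp --rm external_mono mono /mnt/myapp/$EXE"
echo "$cmd"   # print rather than execute, so the sketch works without docker
```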

Including mono in image

When all the dependencies are included within the image, the docker containers are much easier to move around because there are no additional host dependencies. This also means that containers can run different versions of the mono runtime if needed. The added flexibility comes at a small cost – images contain the runtime, so they are larger.


FROM centos
RUN rpm --import ""
RUN yum-config-manager --add-repo
RUN yum -y install mono-complete

The image is larger, but nothing is required on the host, and now you can run mono directly in the container without a data volume:

docker run --rm included_mono mono --version

This image can be easily reused with any mono application that needs to run on CentOS by attaching a data volume containing the application and then running it:

docker run -v /var/lib/myapp:/mnt/myapp --rm included_mono mono /mnt/myapp/MyExecutable.exe

The next step is to build an application-specific image on top of this image.


FROM included_mono
COPY MyExecutable.exe /var/lib/MyApplication/

Building with docker build -t mono_with_app . will build an image that contains the application, and you can run it with no data volumes needed:

docker run --rm mono_with_app mono /var/lib/MyApplication/MyExecutable.exe

This option is definitely the simplest to run, and provides you with a base image for building additional images. Each image takes up space, however. Images with base CentOS and no mono are ~200 MB, whereas images with mono included are over 500 MB.

Cross posted on Github.

Categories: Uncategorized

Listing assembly versions on mono

January 13, 2015 3 comments

Need to quickly get a list of all the assembly names and version numbers you are using in your .NET application? Here is a brief command to retrieve this using the mono disassembler:

find . -name '*.dll' -exec monodis --assembly {} \; | grep "Name\|Version" 
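The Name and Version lines come out on separate lines; a small post-process can pair them up per assembly. A sketch using a stand-in for the monodis output (the exact field spacing of real output may differ):

```shell
# Stand-in for `monodis --assembly` output; real output has more fields.
monodis_output='Name:          MyLibrary
Version:       1.2.3.0'

# Keep the two fields of interest and join each Name/Version pair on one line.
echo "$monodis_output" | grep "Name\|Version" | awk '{ print $2 }' | paste - -
```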
Categories: Uncategorized

Launching a ServiceProcess using mono csharp script

December 30, 2014

When developing service applications that need to run cross platform, you often need some “glue” to make the application behave appropriately on different platforms.  Service applications written in .NET languages derive from ServiceBase so they can be started by the Windows Service Control Manager.  On Windows, they can be added to the service registry with ‘sc create’ or using installutil.exe.  You can usually get these to run on Unix and Linux using the mono-service executable, however this doesn’t always behave the way you might want when used from the various init systems.  Instead of using mono-service on *nix, you can use the csharp script mechanism in mono to launch the service process and handle signals to interact with your application. Here is an example of a service application:

using System;
using System.ServiceProcess;
using System.Threading;
using System.Threading.Tasks;

namespace ServiceApplication {
	public class App : ServiceBase {

		CancellationTokenSource cts = new CancellationTokenSource();

		protected override void OnStart(string[] args) {
			var token = cts.Token;
			Task.Factory.StartNew(() => {
				while(!token.IsCancellationRequested) {
					Console.WriteLine("{0} - service is running", DateTime.Now);
					Thread.Sleep(1000);
				}
			}, token);
		}

		protected override void OnStop() {
			cts.Cancel();
		}
	}
}

All the code above is compatible across platforms and may be installed and run as a Windows Service without modification. Now for a simple script that can run on mono platforms, using the ‘csharp’ command to wrap the service, execute it, and wait for typical Unix signals before shutting down cleanly:


/*
 * Script for launching a Windows ServiceProcess application
 * on mono as an upstart job.
 */

using System.Reflection;
using System.ServiceProcess;
using ServiceApplication;
using Mono.Unix;
using Mono.Unix.Native;

var service = new App();
var mi = typeof(App).GetMethod("OnStart", BindingFlags.Instance | BindingFlags.NonPublic);
var result = mi.Invoke(service, new object[] { new string[] {} });

Console.WriteLine("Service started.");

var signals = new UnixSignal[] {
    new UnixSignal (Signum.SIGINT),
    new UnixSignal (Signum.SIGTERM),
    new UnixSignal (Signum.SIGQUIT),
};

var signal = UnixSignal.WaitAny(signals, -1);
Console.WriteLine("Received {0} signal, exiting.", signals[signal].Signum);
service.Stop();


All the platform specific code is in this script so your application code remains completely cross platform, and it makes for a very simple upstart script, as an example:

# Copy to /etc/init/monoServiceApp.conf
start on runlevel [2345]
stop on runlevel [06]

chdir /path/to/service/

script
   csharp ServiceRunner.cs
end script

The job above will execute the ServiceRunner.cs script using the csharp shell command automatically on startup, shutdown, reboot, and when executed manually using upstart commands (e.g. sudo start monoServiceApp). Since it’s a simple script, upstart will track the PID automatically and send SIGTERM when it’s time to exit, which the script catches for a clean call to the Stop() method.

Note that you can do something similar with mono-service, which is more robust and has options that will let it fit with various init systems. However, mileage tends to vary, so a csharp wrapper script can be a nice alternative.

Categories: Uncategorized

Creating a Mono 3 RPM on CentOS

July 27, 2013 2 comments

A quick guide to creating an rpm of mono 3 from source, starting with a CentOS 6.4 minimal using fpm to create the package.

  1. Install prerequisites for building mono 3

    yum -y update
    yum -y install glib2-devel libpng-devel libjpeg-devel giflib-devel libtiff-devel libexif-devel libX11-devel fontconfig-devel gettext make gcc-c++
  2. Download and extract libgdiplus and mono sources

    curl -O
    curl -O
    tar -jxf libgdiplus-2.10.9.tar.bz2
    tar -jxf mono-3.1.2.tar.bz2
  3. Configure, make, and make install

    cd libgdiplus-2.10.9
    ./configure --prefix=/opt/libgdiplus
    # overwrite incompatible libtool script in pixman
    cp libtool pixman/libtool
    make
    su -c "make install"
    cd ../mono-3.1.2
    ./configure --prefix=/opt/mono-3.1.2
    make
    su -c "make install"
  4. Install ruby prerequisites for fpm, then fpm itself

    yum -y install ruby ruby-devel rubygems
    gem install fpm
  5. Make sure /etc/hosts contains your host name since it will be used in some fpm defaults.

    vi /etc/hosts
  6. Install rpm tools and generate the RPM

    yum -y install rpm-build
    fpm -s dir -t rpm -n "mono" -v 3.1.2 /opt/mono-3.1.2 /opt/libgdiplus

You probably will want to explore some other fpm options to customize your rpm further, like setting the maintainer, dependencies, or running pre/post install scripts.
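For example, fpm’s -m/--maintainer, -d/--depends, and --after-install flags cover the customizations mentioned. The values below are illustrative, and the command is printed rather than run so it can be reviewed first:

```shell
# Illustrative fpm invocation with extra package metadata; printed for review.
cmd='fpm -s dir -t rpm -n "mono" -v 3.1.2 -m "you@example.com" -d glib2 -d libpng --after-install post-install.sh /opt/mono-3.1.2 /opt/libgdiplus'
echo "$cmd"
```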

Categories: Uncategorized

That’s a Weak MongoDB Laundry List

December 26, 2012

I read an article today with a laundry list of 10 reasons MongoDB didn’t work out for someone. The list didn’t make a lot of sense to me since there were absolutely no details about why MongoDB didn’t work out. No comments allowed on the article either. In my experience over the past two years with MongoDB, these deficiencies don’t really exist, so I thought I’d debunk them, and leave the article open for comments in case there might be more to the author’s story.

MongoDB logging: it logs to the file given by --logpath, or to mongod’s standard output. You can adjust verbosity, enable query profiling, etc.

Monitoring: it runs as a Windows service or Linux daemon, and each OS has the ability to monitor either. If you want more application specific monitoring, it’s full of options, such as mongostat and the serverStatus command.

Slow query optimization: .explain() is a very good friend for understanding what indexes are being used on a “slow” query. Also, profiling can show you the queries that are performing slowly. A new addition for improving query performance is the touch command, which loads data and indexes into memory for better performance.

Init scripts: my development team maintains a Javascript file for initializing the database on each deployment. It’s run as a parameter to the mongo client process. I find this to be very flexible, and being Javascript, much nicer for writing imperative logic than in a SQL script.
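As a sketch of what such a file can look like (the database, collection, and index names here are made up; the mongo invocation is shown as a comment since it needs a live instance):

```shell
# Write an illustrative init script of the kind described.
cat > /tmp/init-db.js <<'EOF'
// Idempotent setup, safe to run on every deployment
db = db.getSiblingDB("myapp");
db.users.ensureIndex({ email: 1 }, { unique: true });
if (db.settings.count() === 0) {
    db.settings.insert({ version: 1 });
}
EOF
# mongo myapp /tmp/init-db.js    # run it against a live instance
```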

Graphing: I’m guessing this refers to generating charts and graphs. The lack of built in tools here doesn’t come as a surprise – it’s expected that your application will want to do this. The database stores data and provides a means for querying, and rendering images is something to be done on the client side.

Replication: Replica sets are ridiculously easy to create and maintain. Automatic failover is very reliable and makes replicated MongoDB instances trivial.

Sharding (and rebalancing) strategy: You have to think through sharding; it’s not something you should jump into. I don’t really see this as a MongoDB problem – the same challenges exist no matter how you choose to shard any database. The selection of shard key is crucial for sharding success.

Backups: there are many, many options here. For my needs, since I’m running with a replica set, I take a secondary offline and copy the files. This causes no downtime and has no impact on the other members of the replica set. There are also options for performing hot backups using mongodump, OS snapshots, file system snapshots, and so on.

Restoration: the complexity of restoration depends in many ways on the complexity of the backup, but if you did a hot backup with mongodump, you can use mongorestore. If you copied from a secondary, take one node offline, copy the data files into place, tell the other nodes to freeze or step down using rs.freeze and rs.stepDown, then start the restored node. It will become primary because the other nodes are forced to be secondaries.
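Collected as a sketch, with illustrative paths and host names (printed as a checklist to review rather than executed, since the real thing needs a live replica set):

```shell
# The copy-from-secondary restore sequence, as a reviewable checklist.
steps='mongod --shutdown --dbpath /var/lib/mongodb    # stop the node being restored
cp -a /backup/mongodb/. /var/lib/mongodb/             # copy the data files into place
mongo otherhost:27017 --eval "rs.freeze(120)"         # hold another node as secondary
mongod --dbpath /var/lib/mongodb --replSet rs0        # start the restored node'
echo "$steps"
```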

50 other things: read the documentation. This goes along with any database solution. MongoDB has a lot of it, and also a very active and helpful community.

Categories: MongoDB

How to get a MongoDB oplog without a full replica set

September 3, 2012 6 comments

The MongoDB oplog is a rolling log of the most recent operations that occurred in a MongoDB instance. It’s intended for replication so that multiple data nodes can follow the oplog and immediately get updates, however, your application code has access to it as well. This means you can “tail the oplog” and your application code is notified as soon as an insert, update, or delete operation occurs anywhere on the database.  This is very cool stuff – so cool, in fact, that it’s nice to be able to enable it even if you aren’t using a multi-node replica set. Replica sets provide high availability and disaster recovery, but you might not need that or have the extra server resources for it – maybe you just want real time notification.  Here is how you enable the oplog on a standalone MongoDB instance.

In the /etc/mongodb.conf file on Debian (or a similar file on RHEL), you should set two options and then restart your MongoDB instance.

replSet = rs0
oplogSize = 100
Or you can set these at the command line, as you might on Windows:

--replSet rs0 --oplogSize 100

The first setting, “replSet rs0”, tells mongod this will be a replica set node, which will allow you to run “rs.initiate()” from the mongo shell.  Doing so makes this into a single node replica set.  The name “rs0” is completely arbitrary – call it whatever you want.  The next option, “oplogSize 100”, limits the oplog to 100 MB.  You can leave this option off and it will default to using 5% of your free disk space.  You’d want a huge oplog if you need multiple data nodes to survive an outage of several hours or days and be able to catch back up without needing a full resync.  However, if you’re running a single node just to get an oplog and real time notifications, you can cap it at 100 MB, or maybe much less.
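To get a feel for what the 5% default works out to on a given machine, a quick sketch (df output formats vary slightly across systems, so this uses the POSIX -P form):

```shell
# Roughly what the default oplog size (5% of free disk) comes to, in MB.
free_kb=$(df -kP . | awk 'NR==2 { print $4 }')
oplog_mb=$(( free_kb / 20 / 1024 ))   # 5% of free KB, converted to MB
echo "default oplog here would be about ${oplog_mb} MB"
```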

After setting these options and starting your MongoDB instance, connect to the shell, change to the local database, and run “rs.initiate()”.  You’ll now have an oplog to tail for real time notifications.

> show dbs
local (empty)
> use local
switched to db local
> rs.initiate()
{
 "info2" : "no configuration explicitly specified -- making one",
 "me" : "MONGOSERVER:27017",
 "info" : "Config now saved locally. Should come online in about a minute.",
 "ok" : 1
}
> show collections

Categories: MongoDB