Control your cloud from node.js

There was no node.js module implementing the Onapp API, so I figured it would be a good idea to write one as my first published npm module. 🙂

There are still a lot of methods to implement, but the basic stuff is there. When I have time I will implement more.

The module’s structure is similar to other node client implementations out there, so it is very readable.

The node.js community is awesome, and publishing to it is certainly something I’m going to be doing more often.

Check it out at: https://github.com/apocas/node-onapp

Installation, as usual, is done using npm awesomeness:

npm install onapp

In order to get started you need to instantiate a client.

var onapp = require('onapp');

var config = {
 username: 'username@email.com',
 apiKey: 'api_hash',
 serverUrl: 'http://192.168.1.1'
};

var client = onapp.createClient(config);

The options passed during VM creation map exactly to the Onapp API. This way you create a VM just as if you were using the original API.

var options = {
  memory: '1024',
  cpus: '1',
  cpu_shares: '50',
  hostname: 'tests.tests.com',
  label: 'VM from node',
  primary_disk_size: '10',
  swap_disk_size: '1',
  primary_network_id: '2',
  template_id: '6',
  hypervisor_id: 2,
  initial_root_password: '12345675',
  rate_limit: 'none'
};

client.createVirtualMachine(options, function (err, vm) {
  if(err !== null) {
    console.log(err);
  } else {
    console.log(vm);
  }
});

Powering off a VM:

client.getVirtualMachine('vm_id', function (err, vm) {
  if(err !== null) {
    console.log(err);
  } else {
    vm.off(function(error, data){});
    //vm.reboot(function(error, data){});
    //...
  }
});

How to add a new data store to an Onapp cloud (iscsi & multipathd)

Here goes another one of those “post to remember later” posts 🙂

If you want to add a new data store to Onapp, first, obviously, you need to create a new LUN on your SAN and make sure it is visible to the hypervisors.

1. Create the data store

  • Add a new data store in the cloud control panel: Settings->Data stores->Add new data store
  • After this, Onapp will create an identifier for the new data store, usually something like “onapp-[string]”. Write down this identifier, you will need it.

2. Create the volume group

  • Log in to a hypervisor and run “iscsiadm -m node -R”
  • You should now have a new storage device; run “multipath -ll” to make sure everything is ok.
  • Create a new physical volume with: “pvcreate --metadatasize 50M /dev/mapper/[the new device]”
  • Now create a new volume group with: “vgcreate [data store identifier from Onapp] /dev/mapper/[the new device]”. Make sure you don’t use the wrong device. A consolidated sketch of these commands follows this list.
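
For reference, here is a minimal consolidated sketch of the hypervisor-side commands for this step. The device name mpathX and the volume group name onapp-abcdefghij are placeholders; use the actual multipath device and the identifier Onapp generated for your data store.

# rescan the iscsi sessions and check the multipath topology
iscsiadm -m node -R
multipath -ll

# create the physical volume and the volume group named after the Onapp identifier
pvcreate --metadatasize 50M /dev/mapper/mpathX
vgcreate onapp-abcdefghij /dev/mapper/mpathX

# confirm the new volume group is visible
vgs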

3. Rescan iscsi in all hypervisors

  • Run “iscsiadm -m node -R” and “pvscan” on all hypervisors (a small loop sketch follows).
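
If you have more than a couple of hypervisors, a small loop over SSH saves some typing. This is just a sketch: hv1 and hv2 are hypothetical hostnames and it assumes root SSH access to each hypervisor.

for hv in hv1 hv2; do
  ssh root@$hv 'iscsiadm -m node -R && pvscan'
done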

4. Add the data store

  • Add the new data store to the desired hypervisor zone in Onapp: Settings->Hypervisor Zones->click on a zone->Manage Data Stores

As usual, here comes the disclaimer: this worked at the time I wrote it; it may not work in the future.

Onapp hypervisor installation

These instructions assume that your control panel (with IP 10.40.1.1 in this example) is where templates and backups are stored; if this is not the case, adjust them accordingly.

Before starting with the hypervisor installation, make sure you can reach your SAN/volumes and the Onapp management network.

  1. echo '10.40.1.1:/onapp/backups /onapp/backups nfs soft,noatime,intr,tcp,rsize=32768,wsize=32768 0 0' >> /etc/fstab
  2. echo '10.40.1.1:/onapp/templates /onapp/templates nfs soft,noatime,intr,tcp,rsize=32768,wsize=32768 0 0' >> /etc/fstab
  3. Go to 10.40.1.1 and edit the file /etc/exports in order to allow NFS access from the IP of the hypervisor you are installing (an example is sketched after this list). Restart NFS with /etc/init.d/nfs restart
  4. mkdir -p /onapp/backups
  5. mkdir -p /onapp/templates
  6. wget http://rpm.repo.onapp.com/repo/centos/5/x86_64/OnApp-x86_64.repo -O /etc/yum.repos.d/OnApp-x86_64.repo
  7. yum install onapp-hv-install
  8. /onapp/onapp-hv-install/onapp-hv-xen-install.sh or /onapp/onapp-hv-install/onapp-hv-kvm-install.sh
  9. /onapp/onapp-hv-install/onapp-hv-config.sh -h 10.40.1.1
  10. nohup ruby /onapp/tools/vmon.rb > /dev/null &
  11. nohup ruby /onapp/tools/stats.rb > /dev/null &
  12. mount -a (in order to make sure everything is alright with your fstab)
  13. reboot
  14. Go to your cloud control panel and add the hypervisor to the desired hypervisor zone.
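
For step 3, here is a minimal sketch of what the /etc/exports entries on the control panel (10.40.1.1) could look like. The hypervisor IP 10.40.1.10 is a placeholder and the export options are only a reasonable assumption; adjust both to your environment.

# /etc/exports on the control panel
/onapp/templates 10.40.1.10(rw,no_root_squash)
/onapp/backups 10.40.1.10(rw,no_root_squash)

After editing the file you can reload the exports with “exportfs -ra”, or simply restart NFS as in step 3.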

These instructions worked at the time I wrote this; they may not work in the future.

LVM Block device filter with multipathd and lvm

If you use a multipathd & LVM setup with an active-passive redundant SAN (e.g. EMC VNX) and you use both SPs (storage processors) in multipathd for additional redundancy, you will get I/O errors on some block devices.

This happens because LUNs are only announced by the passive SP in case of failover; if you run a pvscan or vgscan you get I/O errors, since there are no LUNs being announced on the passive SP’s block devices.

Adding a filter in /etc/lvm/lvm.conf solves this: filter = [ "r/disk/", "r/sd.*/", "a/.*/" ]

With this filter LVM will not use the sd* block devices, only the mpath* devices, which are the ones you want, and the I/O errors disappear from the logs.
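
For context, here is a minimal sketch of how that filter sits inside the devices section of /etc/lvm/lvm.conf, with a quick check afterwards. The filter line is the one from above; the rest is only illustrative.

devices {
    # reject anything matching "disk" or "sd*", accept everything else (the mpath* devices)
    filter = [ "r/disk/", "r/sd.*/", "a/.*/" ]
}

# verify: physical volumes should now only be reported on mpath* devices, without I/O errors
pvscan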