
HyperVM


HyperVM is a multi-platform, multi-tiered, multi-server, multi-virtualization, web-based
application that allows you to create and manage virtual machines based on different
technologies across machines and platforms. It currently supports OpenVZ and Xen
virtualization and is available for RHEL 4/5 as well as CentOS 4 and CentOS 5.

Installation

You do not need to install OpenVZ separately on the master node, since HyperVM
installs OpenVZ as part of its own installation. A slave installation is needed only if you
want to control OpenVZ containers on remote servers through HyperVM.

Installing A HyperVM Master

The HyperVM master allows you to control OpenVZ containers on the master itself and
on slave machines. Even if you don't want to run slave machines, you need a master.
The first step is to disable SELinux. For this you can run:

# setenforce 0
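
This disables SELinux only for the running session. To keep it disabled across reboots (a general RHEL/CentOS convention, not a HyperVM-specific step), also set it in the SELinux config file:

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config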
Afterwards we install HyperVM as follows:

# wget http://download.lxlabs.com/download/hypervm/production/hypervm-install-master.sh

# sh ./hypervm-install-master.sh --virtualization-type=openvz

The next step is to configure the boot loader. For that you have to edit
/boot/grub/grub.conf and change the value assigned to default so that the OpenVZ
kernel becomes the default kernel.
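
For illustration, assuming the installer added the OpenVZ kernel as the first entry (index 0), the relevant part of /boot/grub/grub.conf might look like this (the kernel version string is an example and will differ on your system):

default=0
timeout=5
title CentOS OpenVZ (2.6.18-028stab053.5)
root (hd0,0)
kernel /vmlinuz-2.6.18-028stab053.5 ro root=/dev/sda1
initrd /initrd-2.6.18-028stab053.5.img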

Then we reboot the system:

# reboot

Using HyperVM

1. You can use either https://IP_ADDRESS:8887 or http://IP_ADDRESS:8888.
2. Log in with the user admin and the password admin.
3. The first thing you are asked to do after the first login is to change the default
password for admin.
4. The next thing you are asked to do is configure LXguard. LXguard is a tool like
fail2ban or DenyHosts that blocks remote IP addresses from which too many failed logins
originated (this is to prevent brute-force attacks).
Fill in the maximum number of failed login attempts that are allowed before LXguard
kicks in and blocks the IP.
You should then go to the Whitelist tab and whitelist your own IP address.

Please check how HyperVM looks:

[Screenshot: the HyperVM web interface]
1. Creating IP Pools

Before we can create our first OpenVZ container, we need to define an IP pool from
which new containers can take an IP address. Go to Ip Pools. On the Ip Pools page, click
on the Add Ip Pool tab. You have to provide the IP Pool Name, First IP address, Last
IP address, Resolv Entries and Gateway on the page shown. This is similar to
the way IP pools are added in the SolusVM master.
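
For illustration, a small pool might be filled in like this (all values are examples; use addresses from your own network):

IP Pool Name: pool1
First IP: 192.0.2.100
Last IP: 192.0.2.150
Resolv Entries: 192.0.2.1
Gateway: 192.0.2.1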

2. Define at least one resource

Besides creating an IP pool, we must also define at least one resource plan before we
can create our first OpenVZ container. On the HyperVM Home, click on Resource
Plans, and then on the Add Resource Plan tab.
Fill in a name and description and then specify the resources for each OpenVZ container
that will use this resource plan.

3. Create VM

Now we can create our first OpenVZ container. Click on the Virtual Machines icon in
the Resources section of the HyperVM Home; on the page that loads, click on the Add
Openvz tab.
Provide a name for the new OpenVZ container and fill in a root password. Type in a
free IP address from the IP pool that you created before. Then provide a hostname,
select the resource plan you've just created and an OS template for the container, then
click on Add.
After a few moments, you should see your new container on the Virtual Machines
overview page. You can start and stop the container by clicking on the bulb in the S
column, but you can also control it from its own control panel, which you can reach by
clicking on the container's name in the VM Name column.

SolusVM (Solus Virtual Manager)


Solus Virtual Manager (SolusVM) is a powerful GUI-based VPS management system with
full OpenVZ, Linux KVM, Xen Paravirtualization and Xen HVM support. SolusVM allows
you and your clients to manage a VPS cluster with security and ease. With SolusVM you
can easily do the following:

1. Start, stop & reboot your VPS.
2. Monitor your VPS server's disk space, memory & bandwidth usage.
3. Rebuild the VPS back to its original (fresh) state.
4. Change the VPS server's hostname.
5. Change the VPS root password.
6. Log in to the VPS directly via a serial SSH console.

Installation

Installing a Master

In SSH as root do the following:

wget http://soluslabs.com/installers/solusvm/install
chmod 755 install
./install

Please note that you cannot install a SolusVM master directly on a Xen slave. You can,
however, install the master in a Xen VPS hosted on that slave; it just cannot be
installed directly onto the slave itself.

You will now be presented with the following menu as illustrated below:

[Screenshot: SolusVM installer menu]
Select option 1.
You will now be presented with the next menu as illustrated below:

If you need to install a master that won't host any virtual servers, select option 2
(recommended).
If you need to install a master that will host virtual servers, select option 1.
Once installed, go to http://myipaddress:5353/admincp/ and log in using the username
vpsadmin and password vpsadmin.
Installing a Slave

In SSH as root do the following:

wget http://soluslabs.com/installers/solusvm/install
chmod 755 install
./install

You will now be presented with the following menu as illustrated below:

[Screenshot: SolusVM installer menu]
Select option 2.
You will now be presented with the next menu as illustrated below:

[Screenshot: SolusVM slave installer menu]
If you need to install a slave that will host OpenVZ virtual servers, select option 1.
If you need to install a slave that will host Xen virtual servers, select option 2.
If you need to install a slave that already hosts virtual servers, select option 3.

Basic actions in the SolusVM panel

Add a Slave Node

Hover over "Nodes" and click "Add Node".
a) Node Name - Enter a name for your node so that you can recognise it easily. Some use
"Node1" or some use "Server 1", however, you can use whatever you please
b) IP Address - Enter the IP address of the slave server you just made.
c) Hostname - Enter the Hostname of the slave server you just made.
d) SSH Port - Enter the SSH port of the slave server you just made.
e) ID Key - Enter the identification key that the SolusVM slave installer gave you.
f) ID Password - Enter the identification password the SolusVM installer gave you.
g) Country - Enter the country where the server is located.
h) City - Enter the City where the server is located.
i) Add Node -- The management server will now attempt to connect to the slave and
verify any needed data such as OpenVZ installation and the correct variables for the

SolusVM slave installation.

Create a Client

When entering the client's information:
a) Reseller : This box is to choose whether or not the client should be owned by a
reseller. If not, then choose Root (No Reseller).
b) First Name : Enter your client's first name.
c) Last Name : Enter your client's last name.
d) Company : Enter your client's company name. If they do not have one, leave it blank
or enter N/A.
e) Email Address : Enter your client's email address.
f) Username : Enter your client's desired username, or a username that you want the
client to have, such as vps1, 101 or maybe even 1.
g) Password : Enter the requested password for your client's account.
h) Add Client.

Create a Virtual Server

Hover over the "Virtual Servers" menu and click "Create {Type} Virtual Server". Where
{type} is, you would choose the corresponding Virtualization, i.e. OpenVZ, Xen or
HVM.
a) Node : Choose which server you want the VPS to be created on, whether it be
localhost or node99 .
b) Plan : Choose which preset plan (specifications) you would like your new VPS server
to have, if you don't have one setup, You can still continue creating a "Custom" VPS.
Now, click "Continue".
c) Client : Choose which client this VPS is being made for. If you haven't already setup a
client, then you need too.
d) Hostname : Enter the Hostname that your client requested when ordering their VPS.
e) Operating System : Choose which operating system the VPS should have.
f) IP Address : Select the VPS servers main IP address. You can add more IPs after the
VPS has been created. If there is a notice displaying "No Available IP Addresses" then
you need to add an "IP Block", otherwise, you cannot continue.
Now, if you chose a plan on the first stage of creating the VPS then all you have to do
now is enter the Username either asked for by the client or designated by yourself and
then the password, either asked for by the client or designated yourself. Once you have
done that, you can click "Create Virtual Server", if you didn't choose a resource plan,
then read on.
g) Hdd Space: Enter the amount of hard drive space you would like this particular VPS
to have. Remember to type the amount in GB not MB.
h) Guaranteed RAM: Enter the amount of Guaranteed Memory the VPS should have.
Remember to type the amount in MB not GB.
i) Burstable RAM: Enter the amount of Burstable Memory the VPS should have.
Remember to type the amount in MB not GB.
j) Bandwidth: Enter the amount of Bandwidth the VPS should have. Remember to enter
the amount in GB not MB.
k) Create Virtual Server

OpenVZ migration around a SolusVM cluster

On the node that hosts the container, run the following commands:

wget http://files.soluslabs.com/solusvm/scripts/keyput.sh
chmod a+x keyput.sh
./keyput.sh <destination_node_ip> <destination_node_port>
vzmigrate -v --ssh="-p <destination_node_port>" <destination_node_ip> <container_id>
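
For illustration, with a destination node at 192.0.2.20 listening for SSH on port 22 and a container with ID 101 (all example values), the sequence would be:

./keyput.sh 192.0.2.20 22
vzmigrate -v --ssh="-p 22" 192.0.2.20 101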

When the migration is complete you need to update SolusVM so it knows where the VPS
has been moved to.
In SSH on the master, run the following command:

/scripts/vm-migrate <VSERVERID> <NEWNODEID>
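
For example, if the virtual server's SolusVM ID is 105 and the destination node's ID is 3 (both hypothetical values, visible in the SolusVM admin panel), the call would be:

/scripts/vm-migrate 105 3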

Xen


Xen is a paravirtualization platform, which takes a different approach from full hardware
emulation. Paravirtualization works by creating an interface between the virtual
environment's operating system and the hardware; this interface queues and responds to
requests from guest operating systems that have been modified to interact with it.

This key difference from operating system-level virtualization allows Xen VPS
administrators to modify their kernel modules, utilize swap space to meet memory
allocation demands, and watch their Xen virtual private server's boot process as Linux
mounts virtualized devices.

[Figure: Xen virtualization architecture]

Key differences can be summarised as:

Xen Platform
1. Uses more resources
2. Soft memory limit (swap space with performance penalty)
3. Full iptables access

OpenVZ Platform
1. Uses fewer resources
2. Hard memory limit (no swap space)
3. Limited netfilter (iptables) modifications

Xen supported virtualization types

Xen supports running two different types of guests. Xen guests are often called domUs
(unprivileged domains). Both guest types (PV, HVM) can be used at the same time on a
single Xen system.

Xen Paravirtualization (PV)
Paravirtualization is an efficient and lightweight virtualization technique introduced by
Xen and later adopted by other virtualization solutions. Paravirtualization doesn't require
virtualization extensions from the host CPU. However, paravirtualized guests require a
special kernel that is ported to run natively on Xen, so the guests are aware of the
hypervisor and can run efficiently without emulation or virtual emulated hardware. Xen
PV guest kernels exist for Linux, NetBSD, FreeBSD, OpenSolaris and Novell NetWare
operating systems. PV guests don't have any kind of virtual emulated hardware, but a
graphical console is still possible using the guest pvfb (paravirtual framebuffer). The PV
guest graphical console can be viewed using a VNC client or Red Hat's virt-viewer.
There is a separate VNC server in dom0 for each guest's PVFB.

Xen Full virtualization (HVM)
Fully virtualized aka HVM (Hardware Virtual Machine) guests require CPU
virtualization extensions from the host CPU. Xen uses a modified version of QEMU to
emulate full PC hardware, including BIOS, IDE disk controller, VGA graphics adapter,
USB controller, network adapter etc. for HVM guests. CPU virtualization extensions are
used to boost performance of the emulation. Fully virtualized guests don't require a
special kernel, so, for example, Windows operating systems can be used as Xen HVM
guests. Fully virtualized guests are usually slower than paravirtualized guests because of
the required emulation.
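
Before attempting to create HVM guests you can check whether the host CPU exposes the required extensions; this is a generic Linux check, not a Xen command. Intel VT-x shows up as the vmx flag and AMD-V as the svm flag:

# egrep '(vmx|svm)' /proc/cpuinfo

If this prints nothing, the CPU (or a disabled BIOS option) does not support full virtualization and only PV guests will work.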


Basic Xen Configuration

/etc/xen/xend-config.sxp - The xen-tools package installs an example configuration file
for the xend daemon located at /etc/xen/xend-config.sxp which can be examined and
modified if required. This is the basic configuration file for the xen server. We shall add
the following entries to customize our own configuration.

Logging : The log output of the Xen control daemon.
(logfile /var/log/xen/xend.log)    # sets the log path
(loglevel DEBUG)                   # ensures that most of the information is logged

Communication Protocols : The xend daemon offers a variety of communication
protocols over which the management utilities can connect and thereby control the virtual
machines and perform other management functions.

(xend-http-server no)
(xend-unix-server no)
(xend-tcp-xmlrpc-server no)
(xend-unix-xmlrpc-server no)
(xen-api-server ((unix)))

Relocation Services : One of the more compelling features of the Xen virtualisation
environment is the ability to suspend a guest, migrate it to a different physical host and
then resume it, totally transparently to the guest operating system and applications. Since
the xend-relocation-server is insecure with the default configuration, we shall disable it
until we need it.

(xend-relocation-server no)
(xend-relocation-ssl-server no)

Networking : Current versions of Xen offer two distinct methods of connecting guest
domains to the network of the host domain (routing and bridging); the entries below
configure routing.

(network-script network-route)
(vif-script vif-route)

If you have multiple network adaptors installed in the host machine, or you have renamed
a network interface, and wish to specify that this interface is to be used by the route or
bridge scripts you will need to specify the name of the network device using the netdev
option as shown below.

(network-script 'network-route netdev=lan')
(vif-script 'vif-route netdev=lan')

Starting and testing the Xen control daemon

/etc/init.d/xend start
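
To verify that the daemon is running, you can list the active domains; on a freshly booted host only Domain-0 (the control domain) should appear:

# xm list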

Xen Commands

xm uptime
To show uptime for a vps

xm top
To monitor a host and its domains dynamically

xm list
Displays domain information

xm info
Displays host information

xm vcpu-list
Lists domain virtual processors

xm network-list
Lists domain virtual network interfaces

virsh nodeinfo
Returns node information

virsh vcpuinfo
Displays domain virtual processor information

xm log
Shows the xend log

virsh dominfo
Displays domain information

xm dmesg
Reads the xend daemon's message buffer just like dmesg


OpenVZ


OpenVZ is an operating system-level virtualization platform based on a single Linux
kernel which has been modified to support multiple Linux virtual environments (more
commonly referred to as virtual private servers).
The modified OpenVZ kernel isolates the file system, memory, and processes for each
virtual environment, providing OpenVZ VPS administrators with full root access and all
of the commands normally associated with a dedicated server.
OpenVZ allows a physical server to run multiple isolated operating system instances,
known as containers, Virtual Private Servers (VPSs), or Virtual Environments (VEs).
Compared to virtual machines such as VMware and paravirtualization technologies
like Xen, OpenVZ is limited in that it requires both the host and guest OS to be Linux.

[Figure: OpenVZ architecture]


OpenVZ installation

OpenVZ consists of a kernel, user-level tools, and container templates.

Yum pre-setup

If you want to use yum, you should set up the OpenVZ yum repository first. Download
the openvz.repo file and put it into your /etc/yum.repos.d/ directory. This can be
achieved by the following commands, as root:

# cd /etc/yum.repos.d
# wget http://download.openvz.org/openvz.repo
# rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ

Kernel installation

# yum install ovzkernel[-flavor]
e.g.: # yum install ovzkernel.x86_64
Configuring the bootloader

In case GRUB is used as the boot loader, it will be configured automatically: lines similar
to these will be added to the /boot/grub/grub.conf file:
title Fedora Core (2.6.8-022stab029.1)
root (hd0,0)
kernel /vmlinuz-2.6.8-022stab029.1 ro root=/dev/sda5 quiet rhgb vga=0x31B
initrd /initrd-2.6.8-022stab029.1.img

Configuring

Please make sure that the following steps are performed before rebooting into the
OpenVZ kernel.

sysctl

There are a number of kernel parameters that should be set for OpenVZ to work
correctly. These parameters are stored in the /etc/sysctl.conf file. Here are the relevant
portions of the file; please edit accordingly:
# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0
# Enables source route verification
net.ipv4.conf.all.rp_filter = 1
# Enables the magic-sysrq key
kernel.sysrq = 1
# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
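
After editing the file, the new values can be applied without a reboot:

# sysctl -p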

SELinux

SELinux should be disabled. To that effect, put the following line in
/etc/sysconfig/selinux:

SELINUX=disabled

Conntracks

To enable connection tracking, add the following line to the /etc/modprobe.conf file:

options ip_conntrack ip_conntrack_enable_ve0=1

Rebooting into the OpenVZ kernel

Now reboot the machine and choose "OpenVZ" on the boot loader menu.

Installing the utilities

OpenVZ needs some user-level tools installed. Using yum:

# yum install vzctl vzquota

If on the x86_64 platform, you would probably want to run:

# yum install vzctl.x86_64 vzquota.x86_64

Starting OpenVZ

As root, execute the following command:
# /sbin/service vz start

OpenVZ Default Locations

1. /vz - Main directory for OpenVZ.
2. /vz/private/ - Each VPS is stored here, i.e. the containers' private directories.
3. /vz/template/cache - You must download and store each Linux distribution template
here (see the example after this list).
4. /etc/vz/ - OpenVZ configuration directory.
5. /etc/vz/vz.conf - Main OpenVZ configuration file.
6. /etc/vz/conf - Symlinked directory holding each VPS's configuration.
7. Network port - No network ports are opened by the OpenVZ kernel.
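
As an example of item 3, precreated templates can be fetched from the OpenVZ download server into the template cache (the template name below is illustrative; check the server for currently available files):

# cd /vz/template/cache
# wget http://download.openvz.org/template/precreated/centos-5-x86.tar.gz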

UBC, or User Beancounters

A set of limits and guarantees controlled per container. UBC is the major component of
OpenVZ resource management. The current values, along with usage and fail counters,
are shown in the file /proc/user_beancounters.
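
That file can simply be read on the hardware node, and individual parameters can be changed with vzctl using the barrier:limit syntax; the container ID and values below are illustrative:

# cat /proc/user_beancounters
# vzctl set 101 --numproc 240:260 --save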

->Primary parameters
numproc : Maximum number of processes and kernel-level threads allowed for this
container.
numtcpsock : Maximum number of TCP sockets
numothersock : Maximum number of non-TCP sockets (local sockets, UDP and other
types of sockets)
vmguarpages : Memory allocation guarantee. This parameter controls how much
memory is available to a VE. The barrier is the amount of memory that the VE's
applications are guaranteed to be able to allocate.

->Secondary parameters
kmemsize : Size of unswappable memory in bytes, allocated by the operating system
kernel.
tcpsndbuf : The total size of buffers used to send data over TCP network connections
tcprcvbuf : The total size of buffers used to temporarily store the data coming from TCP
network connections
othersockbuf : The total size of buffers used by local (UNIX-domain) connections
between processes inside the system
dgramrcvbuf : The total size of buffers used to temporarily store the incoming packets of
UDP and other datagram protocols
oomguarpages : Guarantees against OOM kill. Under this beancounter the kernel
accounts the total amount of memory and swap space used by the VE processes. The
barrier of this parameter is the out-of-memory guarantee
privvmpages : The barrier and the limit of this parameter control the upper boundary of
the total size of allocated memory

->Auxiliary parameters
lockedpages : Maximum number of pages acquired by mlock
shmpages : Maximum IPC SHM segment size. Setting the barrier and the limit to
different values does not make practical sense
physpages : This is currently an accounting-only parameter. It shows the usage of RAM
by this VE
numfile : Maximum number of open files
numflock : Maximum number of file locks. A safety gap should be left between barrier
and limit
numpty : Number of pseudo-terminals (PTY)
numsiginfo : Number of siginfo structures
dcachesize : Maximum size of filesystem-related caches, such as directory entry and
inode caches
numiptent : Number of iptables (netfilter) entries
swappages : The amount of swap space to show inside the container.

OpenVZ commands

vzctl create VEID --ostemplate ubuntu-9.04-x86_64
The OS template should be saved in the /vz/template/cache/ folder

vzlist -a
Shows a list of all the VPSes hosted on the node.

vzctl start VEID
To start the VPS

vzctl stop VEID
To stop (Shut Down) the VPS

vzctl status VEID
To view the status of the particular VPS

vzctl stop VEID --fast
To stop the VPS quickly and forcefully

vzctl enter VEID
To enter a particular VPS

vzctl set VEID --hostname vps.domain.com --save
To set the Hostname of a VPS

vzctl set VEID --ipadd 1.2.3.4 --save
To add a new IP address to the VPS

vzctl set VEID --ipdel 1.2.3.4 --save
To delete the IP from VPS

vzctl set VEID --userpasswd root:new_password --save
To reset root password of a VPS

vzctl set VEID --nameserver 1.2.3.4 --save
To add nameserver IPs to the VPS

vzctl exec VEID command
To run any command on a VPS from Node

vzyum VEID install package_name
To install any package/Software on a VPS from Node

vzctl set VEID --onboot yes --save
To make sure VPS boots automatically after a reboot.

vzlist -o vpsid,laverage
To display the load average of all containers along with their container IDs.

Virtualisation


Virtualization is the creation of a virtual rather than actual version of something, such as
a hardware platform, an operating system, a storage device or network resources.
For example, operating system virtualization is the use of software to allow a piece of
hardware to run multiple operating system images at the same time. The technology got
its start on mainframes decades ago, allowing administrators to avoid wasting expensive
processing power.

Types of virtualization

1. Hardware virtualization
Creation of a virtual machine that acts like a real computer with an operating system.
Software executed on these virtual machines is separated from the underlying hardware
resources.
In hardware virtualization, the term host machine refers to the actual machine on which
the virtualization takes place; the term guest machine, however, refers to the virtual
machine. Likewise, the adjectives host and guest are used to help distinguish the software
that runs on the actual machine from the software that runs on the virtual machine. The
software or firmware that creates a virtual machine on the host hardware is called a
hypervisor or Virtual Machine Monitor.
Different types of hardware virtualization include:
Full virtualization : Almost complete simulation of the actual hardware to allow
software, which typically consists of a guest operating system, to run unmodified.
Partial virtualization : Some but not all of the target environment is simulated. Some
guest programs, therefore, may need modifications to run in this virtual environment.
Paravirtualization : A hardware environment is not simulated; however, the guest
programs are executed in their own isolated domains, as if they were running on separate
systems. Guest programs need to be specifically modified to run in this environment.
Using hardware support (such as CPU virtualization extensions) to improve the
efficiency of hardware virtualization is known as hardware-assisted virtualization.

2. Software virtualization
It includes operating system-level virtualization, which is the hosting of multiple
virtualized environments within a single OS instance, and application virtualization and
workspace virtualization, the hosting of individual applications in an environment
separated from the underlying OS.

3. Memory virtualization
Aggregating RAM resources from networked systems into a single memory pool. Virtual
memory gives an application program the impression that it has contiguous working
memory, isolating it from the underlying physical memory implementation.

4. Data virtualization
Data virtualization is the presentation of data as an abstract layer, independent of
underlying database systems, structures and storage. Database virtualization is the
decoupling of the database layer, which lies between the storage and application layers
within the application stack.

5. Storage virtualization
Storage virtualization is the process of completely abstracting logical storage from
physical storage.
Distributed file system : Any file system that allows access to files from multiple hosts
sharing via a computer network. This makes it possible for multiple users on multiple
machines to share files and storage resources.

6. Network virtualization
Creation of a virtualized network addressing space within or across network subnets.
Desktop virtualization is the concept of separating a desktop environment from its
physical computer (and its associated operating system) and storing it on another machine
across a network, such as a central server. Thin clients employ desktop virtualization.

