Saturday, December 31, 2011

Ubuntu 10.10 KVM Server Managed by ConVirt

Project successful, but abandoned in favor of virt-manager VMM

Findings:  I will abandon use of ConVirt 2.0 in favor of virt-manager.
ConVirt is not for the faint of heart, and imho its limitations and challenges overshadow its advantages at this point in time.  Keep watching, though: maybe someday, if they stick with it, they will have a world-class open-source hypervisor manager!

Pros of virt-manager over ConVirt 2.0
  • Cleaner interface
  • Lower footprint and easier setup
  • Does everything a small shop needs
  • More options on disk management
  • No need to have a VM running constantly just for running the interface
  • Much more active development
  • Simple server configuration with minimal extra packages and processes running at host level
Cons of virt-manager
  • Need to run from within Linux (or Mac via ssh -X with virt-manager installed on the server)
    • if you have Linux or Mac anyway, not a big deal - or you could create a VM dedicated to running virt-manager, which is in effect what ConVirt required anyway.
    • you could set it up to tunnel in remotely and run virt-manager via a remote X session
  • no GUI network setup tools; configuration needs to be done in config files or via commands --- for now!
    • simple bridging or NATing is easy though
Pros of ConVirt 2.0
  • Web interface - log in from anywhere
  • Better statistics
  • live migration - yes, but virt-manager also does this handily now
Cons of ConVirt 2.0
  • ConVirt no longer seems to be Convirture's focus... now it's Enterprise Cloud... will the product fade away?
  • Complicated setup
  • Disorganized GUI
  • Few extra real features for the fuss
  • Requires root login to all managed servers

Overview of Tasks
  1. Prepare Server
    • Install Ubuntu 10.10
    • Install ConVirt tools
    • 1.5 - prepare PC to host virtual machines via KVM
  2. Prepare Machine for receiving ConVirt 2.0
    • Use existing computer or virtual machine or prepare virtual machine manually
  3. Install ConVirt 2.0 (CMS) on a separate machine
    • Install the ConVirt 2.0 Framework on the above machine; specifically, this refers to installing the ConVirt Management Server on the machine used to manage the ConVirt-enabled managed servers
    • 3.5 - set up ConVirt to connect to managed servers
  4. Exploring virt-manager VMM as an alternative
1)  Prepare Server
Starting with Ubuntu Server 64 bit 10.10.  Roughly following this guide:

  • Install Ubuntu 64bit Server 10.10 using appropriate options and the following:
    • choose to apply security updates automatically
    • choose software to install: 
      • OpenSSH server
      • Virtual Machine host
  • Install ConVirt tools
    • installed packages the document suggested I require
      • sudo apt-get install ssh kvm socat dnsmasq uml-utilities lvm2 expect
      • many, but not all, of these packages were not yet installed
    • run the convirt-tool script which "creates appropriate public bridges, required scripts and writes a summary of its operations to the /var/cache/convirt/server_info file"
      • enable root account
        • sudo passwd root
        • (could I have done this using a persistent root login via [sudo -i] command?)
      • used wget to download tarball and unpacked it (tar -xzf convirture-tools-2.0.1.tar.gz)
      • message regarding using convirt tools given with command ./convirt-tool -h
        • shows qemu-kvm-0.12.5, Ubuntu 10.10, Kernel 2.6.34-11, etc
      • ./convirt-tool install_dependencies
        • checks dependencies
        • installs kpartx & python-pexpect
      • brctl show
        • shows bridges, in my case virbr0 is currently setup, of course I will want to setup a br0 or the like which is linked to eth0
      • ./convirt-tool setup
        • this ran through the setup very fast
        • when I ran the brctl show command again, I saw:
          • br0 attached to eth0
          • virbr0
      • nano /etc/network/interfaces
        • wanted to see the network setup out of curiosity, which looks much like the one I setup on my PC:
        • (loopback stuff, then:)
        • auto eth0
        • iface eth0 inet manual
        • auto br0
        • iface br0 inet dhcp
          • bridge_ports eth0
          • bridge_fd 0
          • bridge_stp off
          • bridge_maxwait 0
      • ifconfig
        • shows br0 with ip address of server and eth0 with no ip address... as expected since it connects through the bridge, also shows the virbr0
        • nothing to do here
      • nano /etc/libvirt/qemu/networks/default.xml
        • shows the virbr0 interface 
        • nothing to do here
    • Adding to the CMS - done after setting up the CMS as a virtual machine
      • see below
    • VNC setup
      • done, see below
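With the setup above complete, the host can be sanity-checked from the console.  A minimal sketch (assumes the bridge ended up named br0, as brctl showed above):

```shell
# List bridges: expect br0 (with eth0 attached) plus virbr0
brctl show
# br0 should hold the server's IP address; eth0 should have none
ip addr show br0
# Confirm libvirt/KVM answers; an empty list is fine on a fresh host
virsh -c qemu:///system list --all
```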
1.5) Prepare PC to also host Virtual machines under KVM
 I decided to also prepare my main personal computer to host virtual machines and be controlled by ConVirt 2.0; that way I can move VMs from the server to my PC and vice-versa.  Generally followed:  
  • sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
    • This installed core packages needed for KVM
  • sudo apt-get install virt-viewer
    • so I can view virtual machine instances on computer outside of ConVirt
  • log out, then back in to effect the new user group changes
  • virsh -c qemu:///system list
    • this command shows virtual machines (currently none), but also verifies the install went well
  • sudo apt-get install virt-manager
    • installs GUI tool to manage virtual machines (outside of ConVirt)
    • Works well.
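Worth doing before any of this: confirm the CPU actually supports hardware virtualization.  A quick check (the cpu-checker package is an assumption on Mint, but it is in the Ubuntu repositories):

```shell
# Count vmx (Intel VT-x) or svm (AMD-V) CPU flags; 0 means KVM will fall back to slow emulation
egrep -c '(vmx|svm)' /proc/cpuinfo
# Friendlier verdict from the cpu-checker package
sudo apt-get install cpu-checker
sudo kvm-ok
```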
To allow bridging on my computer, I needed to do the following, based on:
  • gksudo gedit /etc/network/interfaces
    • allows editing of interfaces to define bridge interface
    • Added the following lines to the file
      • auto eth0
      • iface eth0 inet manual
      • #this line was changed from ...inet dhcp

      • auto br0
      • iface br0 inet dhcp
        • bridge_ports eth0
        • bridge_stp off
        • bridge_fd 0
        • bridge_maxwait 0
  • sudo /etc/init.d/networking restart 
  • This is not working yet!!!  Found the following (*** most helpful article found ***):
    • Bridged networking does not work by default, so need to do some further setup
    • sudo apt-get install libcap2-bin
    • sudo setcap cap_net_admin=ei /usr/bin/qemu-system-x86_64   (did not work, hmmm)
    • sudo setcap cap_net_admin=ei /usr/bin/qemu
    • gksudo gedit /etc/security/capability.conf
      • add line
        • cap_net_admin    chenier
  • This was not successful.  Though bridge br0 is now working properly, the network is not working properly for the main computer --- br0 and eth0 are showing the same IP address and MAC address.  Also, the computer says the wired network device is not managed.  Running /etc/init.d/networking restart also gives the message "deprecated because it may not enable again some interfaces... *Reconfiguring network interfaces... RTNETLINK answers: No such process; ssh stop/waiting; ssh start/running, process 3083; ssh stop/waiting; ssh start/running, process 3199"
  • Still problems; tried the setcap command on qemu-system-x86_64 again and it did not throw an error this time
  • also changed /etc/network/interfaces line 
    • from..... iface eth0 inet dhcp
    • to....       iface eth0 inet manual
  • Now when rebooting everything works fine, with the exception that the GUI networking device manager does not see or control eth0... is this now a function of the setcap change???  Actually, probably part of the following two points.
  • No problem: the command ifconfig shows a proper connection for br0, and any new VM I create connects to my DHCP server to get its IP address, so it appears all is well.  
  • Also, ifconfig now shows no IP address for my eth0, just for my br0, but I understand that this is normal and that eth0 automatically gets its traffic through br0.
  • I am a bit concerned that restarting networking (sudo /etc/init.d/networking restart) still gives the messages "ssh stop/waiting; ssh start/running, process 3083; ssh stop/waiting; ssh start/running, process 3199".  Maybe this is part of the RTNETLINK process and is normal.  I will want to verify this on the server install.
  • In this guide, it states that the dhcdbd daemon will need to be stopped and disabled if used (desktop installs like mine).  It says to do the following:
    • sudo /etc/init.d/dhcdbd stop
      • this supposedly shuts down the service, but the command was not found on my computer and indeed the file /etc/init.d/dhcdbd is non-existent so apparently my version of Linux Mint uses something else.
  • Alternate configuration of /etc/network/interfaces was found at the bottom of this page:
    • /etc/network/interfaces to read:
      • auto eth1
      • iface eth1 inet manual
      • up ip link set eth1 up
      • auto br0
      • iface br0 inet manual
        • bridge_ports eth1
        • bridge_fd 0
        • bridge_hello 2
        • bridge_maxage 12
        • bridge_stp off
    • I have not tried this yet, but suppose it will also work; since I currently have no problems, there is no need to try it here.  The point about my eth0 having no separate IP address listed is normal, as eth0 traffic routes through br0.
  • The command brctl gives access to control and view bridge functions.  Typing brctl directly lists command options, and man brctl gives more help :)
  • more on bridging:
  • For advanced bridging info and vlan bridging:
    • discusses configuring libvirt networking in the following files
      • /etc/libvirt/qemu/networks/default.xml
        • opening this file on my computer shows the network configuration of my virtual bridge virbr0, which I have not yet used for a virtual machine as I am mostly using the standard bridged network
      • /etc/libvirt/qemu/domain.xml
        • on my computer, this is blank
      • Discusses how to define vlan bridges
      • Shows examples of configuration of the domain.xml and /etc/network/interfaces files for creating subinterfaces
    • points to libvirt networking documentation:

2) Prepare Machine for receiving ConVirt 2.0
Decided to create a VM on my desktop for this.  Its creation is straightforward using Virtual Machine Manager.  I created a VM of Linux Mint 11 64bit Gnome, which is akin to my desktop setup.

OK, that did not work, so I need to install an earlier version of Ubuntu: Linux Mint 11 is based on Ubuntu 11.04, and the script only accounts for versions up to Ubuntu 10.10.  So maybe I will install an Ubuntu server as a VM on my desktop.... done... selecting only OpenSSH server for install.....
hmmmm, after trying the step-by-step ConVirt install, it failed; maybe they wanted Ubuntu desktop instead of server....

Could try with an Ubuntu 10.04 desktop, then use the partner repository..... perhaps I should have tried this first!

In the process of doing the above, I discovered that Virtual Machine Manager can connect to Virtual Machines on my server directly.... though right now it seems to have trouble connecting to local storage on the server.... likely some configuration changes needed.  If ConVirt 2.0 seems to be too unstable or difficult to use, maybe vanilla Virtual Machine Manager will do the trick... Guess I don't really have a need for the fancy stuff like live migration anyway...
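For reference, Virtual Machine Manager's remote connection is just a libvirt URI over SSH, and the same URI works from virsh (user and serverip are placeholders):

```shell
# Point the local virt-manager GUI at the server's libvirtd over SSH
virt-manager -c qemu+ssh://user@serverip/system
# The same connection from the command line
virsh -c qemu+ssh://user@serverip/system list --all
```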

3) Install ConVirt 2.0 CMS software 
On the VM inside my Linux Mint machine, I will follow the instructions at:
I may later install the CMS directly on my desktop, but I don't want to risk messing up my desktop configuration until I have a bit more experience with ConVirt.

  • did wget for 3 files and untarred the first
  • ran the first command, install_dependencies, and ran into an error with libc6-xen not being installable.... may either have to go through the setup manually or use another virtual machine of an earlier distribution.
So, starting again on the Ubuntu Server 10.10 x64 VM created above, with only OpenSSH server installed.
  • sudo apt-get install wget sudo
    • this updated the sudo package
  • wget the 3 files
  • tar -xzf convirt-install-2.0.1.tar.gz
    • untars in the home directory
  • sudo ./convirt-install/install/cms/scripts/install_dependencies
    • installs the dependencies
    • entered "convirt" as the mysql root credential; will change it later
  • Set up the InnoDB buffer and memory pool
    • found the my.cnf configuration file in a different location than specified in the instructions.  It was in the directory /etc/mysql
    • this file states at the top that global options are set up in this file and user-specific ones in ~/.my.cnf ... since this machine is only for the CMS, I will set it up globally by adding the two lines to /etc/mysql/my.cnf in the mysqld section
      • sudo nano /etc/mysql/my.cnf
        • innodb_buffer_pool_size=1G
        • innodb_additional_mem_pool_size=20M
    • sudo /etc/init.d/mysql restart
      • restarts mysql... the ConVirt instructions had typos here
    • untar the CMS tarball... done
    • TurboGears setup gives some errors:
      • EnvironmentError: mysql_config not found
      • Error: installing mysql-python
      • Error: Failed creating Turbogears2 environment
    • Whatever we do now probably will not work without fixing these errors, however
    • setup sql database
    • Run setup convirt ... which throws more errors about dependencies.... maybe they meant for this to be installed on Ubuntu desktop edition rather than server edition.
    • GRRRRR, getting frustrated
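My guess at the root cause: "mysql_config not found" usually means the MySQL client development headers are missing, and mysql-python cannot build without them.  This is an assumption I have not verified against ConVirt's installer, but on Ubuntu the fix would be:

```shell
# mysql_config ships in the MySQL client dev package
sudo apt-get install libmysqlclient-dev
# Should now print a path instead of nothing
which mysql_config
```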

Maybe I will just install the appliance disk.... Now downloading on KVM server
  • Downloaded the appliance, then unpacked it, but could not start it using the instructions on the ConVirt website; instead I completed remote-control setup of the managed server using Virtual Machine Manager.  
    • tar -jxf convirt-appliance-2.0.1.tbz2
      • unpacked file
    • then created directory /mnt/storage/vm_disks
    • moved disk there (c2_appliance.disk.xm)
  • apt-get install chkconfig
    • installed chkconfig, but it did no good: the point was to turn on libvirtd, but it was not installed; another service must be in its place
  • used this guide to set up remote management over SSH, but did not need to turn on the libvirtd service (it must go by another name on Ubuntu server).  Could have used other VMM remote management protocols ****
  • opened the machine via Virtual Machine Manager
    • define new storage pool for server connecting directory /mnt/storage/vm_disks to storage pool called vm_disks
    • created new virtual machine by importing existing disk image and defining the server, then starting
    • login as cms with password convirt
    • login with new credentials
    • start convirt
      • cd ~/convirt
      • ./convirt-ctl start
    • Start a web browser on another computer and point it to
      • http://[ip address of CMS]:8081
      • default credentials are admin:admin
      • change default credentials
3.5) Setup Convirt CMS to connect to managed servers
  • Start a web browser on another computer and point it to
    • http://[ip address of CMS]:8081
    • default credentials are admin:admin
    • change default credentials 
      • admin
      • my new password
  • Connecting
    • Created new server Pool by right clicking the data center
    • added server by right clicking the pool and adding, including credentials and connection info
    • installed VNC viewer and keys from CMS to server plus opened VNC ports
    • Things are running well, and I can install new machines and configure storage pools, etc.
    • However, I am finding that the virt-manager VMM native to Linux is now just as powerful; combine it with the virsh shell, and it is more so.  
    • ConVirt's formatting and arrangement seem a little disorganized and dated.  For example, settings for new virtual machines are not as well presented as in virt-manager, and certainly not close to those of VirtualBox.

4) Connecting Virtual Machine Manager (virt-manager) on desktop computer to Server
    • installed libcap2-bin package
    • give qemu the inheritable CAP_NET_ADMIN capability as described
    • edited /usr/bin/qemu as suggested
  • Able to now define storage devices, create new virtual machines, etc.
  • Bridge networking works well on the server.

  • Running virt-manager on the server from another computer: a curiosity-based test.... if I were on another (Linux) computer and did not have virt-manager installed, could I still manage virtual machines on the server?
    • ssh user@serverip  = to log into the server from a console on the laptop
    • sudo apt-get install virt-manager  = to install the VMM gui
    • exit  = to get out of the ssh session
    • ssh -X user@serverip  = to get back in with X forwarding enabled
    • virt-manager  = to open virt-manager..... works well, and I can connect to the server and see the VMs and settings I applied remotely from virt-manager on my desktop computer.
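Once virt-manager is installed on the server, the whole dance collapses to one command (X11 forwarding is enabled by default in Ubuntu's sshd):

```shell
# Log in with X forwarding and launch the server's virt-manager in one step
ssh -X user@serverip virt-manager
```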


I will abandon ConVirt 2.0 and utilize virt-manager.  


Tuesday, August 23, 2011

Upgrade to LM11 desktop from W7 laptop =keep

My Win 7 laptop was zippy a couple of years ago with 8 GB of RAM and a fast single-core processor. Win 7 updates and a couple of years of use have taken their toll on its current speed, so that now I sometimes struggle to listen to streaming music while entering receipts into a database. Don't get me started about my frustrations trying to watch Netflix or Hulu and do ANYTHING else. It used to work fine, even on dual monitors. On the plus side, I am more productive while doing the anything else, but then again sometimes it is nice to mostly relax while doing a little mundane business accounting in the evening.

So I decided to drop my laptop and move to a faster desktop running Linux Mint 11 (Gnome, 64bit). I already use Linux Mint LXDE on several older laptops at home which were bogging down with Win XP, but now are quite zippy, thanks to LXDE's responsive GUI. I also have a couple desktops for the kids to play on which dual boot XP and Linux Mint 9 (Gnome, 64bit) which I have been very happy with, especially the Linux Mint 9 portion of that.

OK, so down to it.
Standard DVD install. Started off trying to use BIOS (software) RAID after determining Linux Mint does not have Linux RAID options in its setup menu, as is available during the Ubuntu Server install process. Maybe I could set up Linux software RAID using PartitionMagic or SystemRescueCD, but I'm unsure. I got bogged down in the implementation of the BIOS software RAID, specifically in setting up the boot manager, so after thinking about it I decided to install to a single hard drive, which will be more energy efficient anyway, then set up regularly scheduled backups using an rsync-type product.

Syncing and Backup:
First needed to get my files from my Win 7 laptop.

Failed attempt - but only because the shares were not mounted locally, so LuckyBackup cannot reach them
Start by trying to mount via GVFS

GVFS mount
install the needed programs and add yourself to the fuse group, then log off and on
sudo apt-get install gvfs-bin
sudo gpasswd -a [user] fuse
open the samba share in Nautilus via File/Connect to Server... then select Windows share and put in the computer info, then unmount and mount from the terminal

gvfs-mount smb://[ip address]/share_name

Create script with the above command 

#! /bin/sh
gvfs-mount smb://[ip address]/share_name

You need to log off and log in again for the group change to actually take effect.

this allowed me to mount samba shares, but not to a local directory, so LuckyBackup, which does not directly support samba, was not able to use these mounts.

Final Working Solution for Mounting Samba Shares:

temporary local mount created with:
sudo mount -t cifs
sudo mount -t cifs // /home/chenier/LanShare/homeshare_104
sudo mount -t cifs // /home/chenier/LanShare/data_50

created credentials file entries

edited /etc/fstab [sudo gedit /etc/fstab] by adding the following lines:
This did not work:
#samba mount for Win7 laptop using hidden username and password
// /home/chenier/LanShare/data_50 cifs credentials=/home/chenier/.smbcredentials,dmask=777,fmask=777 0 0

This works:
#samba mount for Win7 laptop using hidden username and password
// /home/chenier/LanShare/data_50 cifs username=chenier,password=[password],iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
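With the working line in fstab, the mount can be exercised and verified without a reboot; a sketch:

```shell
# Mount everything in fstab that is not yet mounted
sudo mount -a
# Verify the cifs share landed where expected
mount | grep cifs
df -h /home/chenier/LanShare/data_50
```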

Change Samba Shares for new FreeNAS storage device -- alternately could have used SSH for share
Able to see the Samba shares on the FreeNAS storage device via Nautilus, so do the following to mount them to a permanent location for backup software access

  • mkdir /home/chenier/LanShare/data_20
    • creates the directory used for sharing
  • sudo gedit /etc/fstab
    • comment out the Windows 7 share, no longer needed
    • add the following 2 lines:
    • #samba mount for FreeNAS storage
    • // /home/chenier/LanShare/data_20 cifs username=[user],password=[pass],iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
    • alternates of the above that worked when guests allowed on Samba share are:
      • // /home/chenier/LanShare/data_20 cifs 0 0
      • // /home/chenier/LanShare/data_20 cifs guest,uid=1000,iocharset=utf8,codepage=unicode,unicode 0 0
    • using this last option for now as I am not getting the results I want for file permissions otherwise.
    • save file and exit gedit
  • sudo mount -a
    • remounts everything
Changed network layout so edit one more time:
  • gksudo gedit /etc/fstab
    • change IP address of FreeNAS storage device to the new one
    • saved file
  • sudo mount -a   = to remount everything
  • success
Sharing Directories as NFS
I will use this to access some files on my computer with my KVM servers.  Resource for this:

  • on Desktop
    • sudo apt-get install nfs-kernel-server   =installs the nfs server package, which is not installed by default on Linux Mint... go figure
    • service nfs-kernel-server status   = showed the "nfsd running" message ... if not, see the above document to fix
    • define shares from the /etc/exports file
      • gksudo gedit /etc/exports
      • add share lines as per documentation included in comments in file or via above reference
    • sudo exportfs -a   = exports everything defined in /etc/exports
    • sudo exportfs
      • returns exported files information (to verify)
    • Firewall configuration changes if needed (not needed in default setup of Linux Mint)
  • on Server
    • log into server
    • sudo apt-get install nfs-common   =installs nfs-common which was not installed on my default server setup
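To make the /etc/exports step concrete, a hypothetical export line and the matching mount from the server side (the path and addresses are made up for illustration):

```shell
# On the desktop: an example /etc/exports line sharing an ISO directory
# with hosts on the LAN, read-only:
#   /home/chenier/isos,sync,no_subtree_check)
sudo exportfs -a        # publish the new export
sudo exportfs           # verify it is listed
# On the server: mount the desktop's export
sudo mkdir -p /mnt/desktop_isos
sudo mount /mnt/desktop_isos
```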
Playing Amazon Prime Videos

Install OpenProj
Ubuntu/Linux Mint does not yet have a link to the OpenProj install via the Software Manager, so use:
sudo dpkg -i openproj_1.4-2.deb

Install ClamTK
This is the front end for ClamAV, installed so I can scan files on removable media for viruses as I have several Windows computers at home.

Other Programs Installed:

  • BibleTime - very nice bible, etc reading program
  • Calibre - ebook management software to sync with and manage ebooks for my Sony reader.... I actually have not tested the syncing yet, but I can add my titles just fine.  I intend to test it the next time I want to sync.  Looks like a very nice application, with cool features like converting books and sharing with multiple devices.
  • ClamTK - Antivirus front end
  • Dia - Diagram editor -- although the web-based LucidChart, which integrates with Google Docs, is even better!
  • DigiKam - I use this program for batch-renaming my photos to something based on the day they were taken.
  • Filelight - (not necessary) nice light weight disk usage analysis tool, but Disk Usage Analyzer (Baobab 2.32.0) works just as well or better even, so did not need this extra program... though will keep using it on lighter weight installations.
  • FileZilla - Very nice tool for moving files from local to remote locations using SSH and other connection schemes.
  • FreeMind -  mind mapping tool
  • Furious ISO mount - for mounting virtual disks
  • Guake Terminal - terminal access using f12 and more -- very nice!  Set it to launch when I log in.
  • Hamster-applet (or Time Tracker) - Keeps track of time spent on various tasks while on computer
  • KeePass X - nifty password storage program
  • LibreOffice - latest version right now.... installed by adding additional repository - see Ubuntu help sources for how to do this -- do not try adding .deb packages from LibreOffice web site as it is too time consuming and confusing
  • LuckyBackup - very nice rsync based backup utility
  • MySQL stuff:  see my blog on MySQL setup :)
  • OpenProj - see above
  • PuTTy - for those times I want an SSH session outside of terminal (I prefer this for just a couple things)
  • Remmina - Remote desktop application with support for VNC, RDP, SSH and more
  • SweetHome 3D - not a great program, but good for a free layout/design program
  • Vinagre - another remote desktop application, mainly for VNC.... works very nicely in Gnome.
  • Xournal - cool program for mixing handwriting and text together... too bad there isn't a cloud version yet or it doesn't work with Google Docs.
  • Xiphos Bible Guide - another bible study tool

Stuff I tried and didn't like
Either these apps were not for me or did not work well on my version of mint

  • aclock - graphics are poor, and the menu system is broken in this Gnome, so it is hard to close and gets in the way.
  • Krusader - twin-panel file management is a little too old school for my tastes, plus it added a lot of KDE desktop stuff I didn't otherwise need.  I also generally use FileZilla for moving stuff from local to remote locations.
  • Tomboy Notes - very nice program.... just not how I organize my thoughts.

Various References:

Friday, February 4, 2011

MineOS Notes

Minecraft server via MineOS --- notes

  • Feature Requests
  • Bugs and Support
  • Other Garbage

  • has helpful hints and user error resolutions as well

Minecraft Forum - server (general)

View Screens - for troubleshooting, etc
  • from terminal window
  • [cd]
  • [./]
  • then select screen corresponding to server and press enter
  • (not sure how to exit a screen without killing the server... Ctrl-A then D should detach and leave it running)


Saving files - permanence of temporary files
  • Files listed in /opt/.filetool.lst must have the following command run for them to persist past a reboot
  • [sudo -b] this permanently saves the files - past reboot
  • firewall settings, uservars, etc.  This is because the OS and these files run from memory.  The exception is files running from the hard drive, like /mnt/sda1/...
  • to change list [sudo nano /opt/.filetool.lst]

To change how much RAM a world will use requires editing of the /usr/games/minecraft/uservars file.

Adjust the DEFAULT_MEM=1024
to match the desired amount of memory in MB.

Save changes with 'sudo -b'
Stop and restart any running worlds.
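The same edit can be scripted with sed; a demonstration against a stand-in copy of the file, so nothing live is touched (the real file is /usr/games/minecraft/uservars):

```shell
# Make a stand-in uservars file to demonstrate the edit
printf 'DEFAULT_MEM=1024\n' > /tmp/uservars.demo
# Raise the default world memory to 2048 MB
sed -i 's/^DEFAULT_MEM=.*/DEFAULT_MEM=2048/' /tmp/uservars.demo
grep '^DEFAULT_MEM=' /tmp/uservars.demo
```

Remember the filetool save and a world restart afterward, as noted above.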

Automatic Backup
  • [sudo crontab -u tc -e] enters crontab as user tc in edit mode
  • [i] to enter insert mode
  • [0 4 * * * cd /usr/games/minecraft; python /usr/games/minecraft/ backup three] performs the listed commands according to the schedule
  • minute 0 of hour 4 (4:00 AM), every(*) day of the month, every month, every day of the week
  • [esc] escape key to enter command mode
  • [:wq] then enter for write and quit
  • [crontab -l] list crontabs
  • [sudo -b] this commits the file to permanent storage
Backup - revisited
  • Cron jobs were not running, so I needed to change the directory in the cron start command
  • [sudo nano /opt/]
  • change cron start line to the following
  • "/etc/init.d/services/crond start"
  • save and exit
  • [sudo -b] command to commit changes to permanent storage
  • reboot [sudo /home/tc/]
Date - change to correct
  • [date] showed the incorrect time zone, so to change it to the proper one:
  • [sudo nano /mnt/hda1/boot/grub/menu.lst]
  • add the following to the kernel line
  • "tz=EST4EDT,M3.2.0,M11.1.0"
  • save and exit
  • reboot [sudo /home/tc/]

Thursday, January 20, 2011

CloneZilla on Ubuntu 10.10

Install Ubuntu Base
Install Ubuntu Server AMD64 as a virtual machine in VMware ESXi. Take the default options, and when you get to the software selection screen only choose 'OpenSSH Server'

References: the following are some of the sites I pulled ideas and info from. None of these sites had all the information I needed in one step-by-step formula, plus I set up in a virtual environment with 2 network cards.

Network Configuration
When completed, type [sudo ifconfig] and discover the IP address. The default of getting an IP address from the DHCP server is acceptable. Write down the MAC address and IP address of eth0. I wanted to assign a specific IP address from my router, so I made the entry there, then ran the following commands to renew the IP address
  • [sudo dhclient -r]
  • [sudo dhclient]
  • or I could have used [sudo /etc/init.d/networking restart]
In VMware I had provisioned 2 Ethernet cards. Eth0 is setup to access my main office network using the default setup. Eth1 is setup to manage the DRBL environment as DHCP server. First, I add the following lines to the network setup configuration file.
  • [sudo nano /etc/network/interfaces] then add the following:
    • #eth1 used for DRBL / Clonezilla environment
    • auto eth1
    • iface eth1 inet static
    • address
    • network
    • netmask
    • broadcast
  • ctrl-O to save, ctrl-X to exit
  • [sudo /etc/init.d/networking restart] to restart networking
  • [sudo ifconfig eth1] to verify changes
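For the record, a filled-in version of the stanza above with illustrative addresses (my real values are omitted; every number here is hypothetical):

```shell
# Example /etc/network/interfaces stanza for the DRBL-facing NIC
# (all addresses are illustrative)
#
#   #eth1 used for DRBL / Clonezilla environment
#   auto eth1
#   iface eth1 inet static
#       address
#       network
#       netmask
#       broadcast
# Then apply and check:
sudo /etc/init.d/networking restart
sudo ifconfig eth1
```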

DHCP Server
Now to setup the DHCP server on eth1
  • [sudo apt-get install dhcp3-server] installs DHCP server
  • [sudo nano /etc/default/dhcp3-server] edits the server config file
  • change interface line to read 'INTERFACES="eth1"'
  • save and exit
Now configure DHCP server to dole out addresses from an address pool for any computer connecting to it.
  • [sudo cp /etc/dhcp3/dhcpd.conf /etc/dhcp3/dhcpd.conf.bak] creates backup of configuration files just in case
  • [sudo nano /etc/dhcp3/dhcpd.conf] opens configuration file for editing
  • uncomment the 'authoritative;' line
  • add the following lines
  • option subnet-mask;
  • option broadcast-address;
  • option routers;
  • subnet netmask {
  • range;
  • }
  • save and exit
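A filled-in example of the dhcpd.conf additions, using a single illustrative subnet (values are hypothetical, not my real ones):

```shell
# Example additions to /etc/dhcp3/dhcpd.conf (illustrative values)
#
#   authoritative;
#   option subnet-mask;
#   option broadcast-address;
#   option routers;
#   subnet netmask {
#     range;
#   }
# Restart to pick up the changes:
sudo /etc/init.d/dhcp3-server restart
```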
Well, now that I am reading ahead, I find out that Clonezilla automatically installs a DHCP server.... oh well... the above may not have been needed; let's see what happens.

Install DRBL
  • [sudo -i] allows me to stay as super user without retyping sudo each time
  • [nano /etc/apt/sources.list] edit the sources to add the clonezilla repository to it via addition of the following two lines to the file
  • # needed for clonezilla
  • deb drbl stable
  • then save (ctrl-o) and exit (ctrl-x)
  • [wget] downloads the GPG key for the clonezilla source
  • [apt-key add GPG-KEY-DRBL] adds the key
  • [apt-get update] updates the available packages list
  • [apt-get install drbl] installs DRBL
  • [apt-get upgrade] updates all packages
Setup drbl
[/opt/drbl/sbin/drblsrv -i] starts the setup process, then answer the following questions
  • network installation boot images? [n] since I intend to only suck up and spit out pre-configured machines
  • serial console output on client computers? [n]
  • upgrade the OS? [n] since I just did that

Configure Clonezilla
[/opt/drbl/sbin/drblpush -i] starts the Clonezilla configuration, then answer the following questions
  • defaults used for the first several questions
  • Clonezilla determined (or guessed) correctly that my Internet connection was through eth0
  • for the eth1 DRBL environment, I received a warning that my IP address was on a class A or B private network. It wanted me to use a class C private network (192.168.*.*) and warned that multicast clone performance "will be terribly worse!" with the current configuration. Interesting... I'll leave it alone for now and see what happens, or read up a bit more.
  • Collect MAC addresses of clients? No
  • Offer same IP address? No since this is related to last question
  • Initial number in the last digit set? 11
  • number of clients? 20
  • accept setup? y
  • Diskless Linux service? 2, since we are only using DRBL for cloning, not for providing diskless linux services to client computers
  • Clonezilla mode? 1 [box mode] since it uses less server resources and is adequate for cloning
  • default directory to store your images? /clonezilla --since this is easier to remember than /home/partimag
  • pxelinux password? no
  • boot prompt for clients? no
  • graphic background for PXE menu when boot? yes
  • DRBL server as NAT server? no, since clients do not need to access the Internet through the DRBL server... just using for cloning and not for live usage
  • ready to deploy? yes
  • I now get a message that "The config file is saved as /etc/drbl/drblpush.conf. Therefore if you want to run drblpush with the same config again, you may run it as: /opt/drbl/sbin/drblpush -c /etc/drbl/drblpush.conf"
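Given the answers above (initial last octet 11, 20 clients), DRBL hands each client a fixed address in one contiguous block. A small shell sketch of the resulting pool, assuming a hypothetical 192.168.1.0/24 DRBL subnet (the prefix is my assumption, not from the drblpush output):

```shell
#!/bin/sh
# Print the client IP pool DRBL would allocate:
# starting last octet 11, 20 clients => .11 through .30
# (the 192.168.1 prefix is an assumption)
start=11
count=20
i="$start"
while [ "$i" -le $((start + count - 1)) ]; do
  echo "192.168.1.$i"
  i=$((i + 1))
done
```

So the first client gets 192.168.1.11 and the twentieth gets 192.168.1.30.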
Starting Clonezilla
[sudo /opt/drbl/sbin/dcs] this starts clonezilla
  • 1st screen: select All
  • 2nd screen (switch the mode): select clonezilla-start
  • 3rd screen (mode): select Beginner mode as I am doing normal system cloning
  • 4th screen (clonezilla mode): select select-in-client, which allows choosing save or restore from the client.
  • 5th screen: leave default
  • 6th screen: choose poweroff - shut down client when the clone finishes
  • Working... able to PXE boot on computers connected to second NIC
  • Problems... cannot PXE boot computers which don't support x64
  • Resolution... reinstall 32 bit server to be backward compatible with the older computers I need to image
FOG Server
  • Decided to try FOG server as well, so install on Ubuntu server image 32 bit
  • Generally following these two guides but without the GUI:
  • I had a bit of a challenge setting up a separate LAN on eth1 as I did for Clonezilla, so I'm abandoning it for now.
  • I think I will even consider setting up FOG on the same network. There are a couple of ways to have FOG coexist with your current DHCP server and network. The preferred method is to have your existing DHCP server forward PXE requests to the FOG server. An alternative method is to set up FOG as a DHCP proxy server which picks up PXE requests.
  • I'll leave this for another project for now, but go back to the more familiar clonezilla with easy dual NIC setup. FOG does look otherwise polished and promising so definitely worth a revisit.
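To illustrate the preferred coexistence method mentioned above, the existing ISC DHCP server can point PXE clients at FOG with two directives in dhcpd.conf; the server address and boot file name below are assumptions for illustration:

```
# In the existing ISC dhcpd.conf: direct PXE clients to the FOG server
next-server 192.168.1.10;        # hypothetical FOG server IP
filename "pxelinux.0";           # boot file served by FOG's TFTP service
```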

Clonezilla on 32 bit Ubuntu server

    • Again based on Ubuntu 10.10, but 32 bit version.
    • Follow similar steps for Installation as listed above
    • while setting up the DRBL server I was also asked which architecture to install; I chose i486 (instead of i386 or the 'same as DRBL server' options)
  • Needed to perform steps under network configuration (above)
  • But did not need to perform steps listed in DHCP server configuration (above), this was setup automatically during the DRBL and Clonezilla server setup

    Wednesday, January 5, 2011

    Offline Encyclopedia Content - Spanish

    For the last couple years I've been looking for a good source of encyclopedia content in Spanish for Linux with pictures included. In the past I've used downloaded wikipedia dumps, but these did not include photos and did not include smart searching functionality.

    I then stumbled across CDPedia in Spanish. I downloaded CDPedia project and found it worked well on Win XP machines, but did not work on the Ubuntu or Linux Mint machines I had. Additionally, it took days to download the CD version and weeks for the DVD version as there were limited torrent seeders with limited bandwidth.

    Most recently a couple new programs with a lot of promise have shown up: Kiwix and Okawix. Both of these are opensource with free content. Both are relative newcomers.

    Kiwix uses the open source ZIM format. It can be installed in Ubuntu derivative OS's natively from a personal package archive as follows.
    • sudo add-apt-repository ppa:kiwixteam/ppa
    • sudo apt-get update
    • sudo apt-get install kiwix
    • download ZIM file from
    • Open ZIM file after moving to desired location.
    • Before performing first search the system will ask you to index the file which will take a while.
    Kiwix shows a lot of promise and seems quite stable and simple to use. Tested version is 0.9 alpha. After opening the Spanish Wikipedia for the first time, it reopened the same file the next time it was launched. I am excited to see how this will progress. The one drawback is that there seem to be few ZIM files for Spanish content so far. There is the Spanish Wikipedia, but none of the other wiki content that I have yet found; I imagine that is just a matter of time. This is definitely a strong contender for inclusion on my remastered DVD.

    Okawix is similar to Kiwix. I tested version 0.7. You need to download and then run its executable. If you want to access the program from the menu, you need to make the entries manually, which makes setup more labor intensive and not for the novice. However, adding content is easier than in Kiwix, since it is literally point-and-click: first the language, then the type of content desired. This is a huge feature, and there is much broader Spanish-language content available. You also actively choose whether to include photos, and the files include indexes, so the indexing step of installation is not required. Once an Okawix wiki is added (by downloading or linking to a local source) it appears in the list of available corpora, and switching between the installed wikis is simply point and click. In my last test, Okawix froze up a couple of times over several days for unexplained reasons.

    I'm excited to watch both of these. For Kiwix I like that it installs easily with a menu entry, seems quite stable, and has slightly simpler controls when only a single corpus is desired. For Okawix I like the broader content available. Since Wikipedia is the principal content desired, for now I will move forward with Kiwix while watching Okawix and waiting for a little further development.

    Tuesday, January 4, 2011

    Games not chosen for Spanish Distro

    Here is my list of games that had good reviews or seemed promising but I decided not to include in my distribution for Spanish kids. I am including names and reasons so I don't have to revisit my work here again.

    • einstein - logic game quite challenging, but perhaps too much so
    • flight of the amazon queen - very DOS-like, cheesy game
    • freecol - like Colonization, but more complicated than other similar turn-based strategy games included
    • freedink - all English and not very entertaining
    • micropolis - city sim but complex and no Spanish language
    • singularity - too complex
    • widelands - not the best rts available... currently not polished enough
    • teeworlds - otherwise a fast, cartoon-like multiplayer shooter, but not sure how to set up a LAN server
    • yofrankie - very processor intensive