Change cluster node IP in Proxmox

To change the IP address of a Proxmox cluster node, the following files need to be updated:

/etc/network/interfaces
/etc/hosts
/etc/pve/corosync.conf (only necessary on one node)
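
For the first two files the change is just the address itself. A minimal sketch, assuming a hypothetical node called pve-node1 on bridge vmbr0 that moves to 192.168.1.20 (names and addresses are placeholders):

# /etc/network/interfaces (relevant stanza)
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.20       # new node IP
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

# /etc/hosts (the node's own entry)
192.168.1.20 pve-node1.example.com pve-node1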

However, corosync.conf requires a special editing procedure!

Edit corosync.conf

Editing the corosync.conf file is not always very straightforward. There are two copies on each cluster node, one in /etc/pve/corosync.conf and the other in /etc/corosync/corosync.conf. Editing the one in our cluster file system will propagate the changes to the local one, but not vice versa. The configuration is applied automatically as soon as the file changes, which means changes that can be integrated into a running corosync take effect immediately. So you should always make a copy and edit that instead, to avoid triggering unwanted changes with an in-between save.

cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new

Then open the config file with your favorite editor; nano and vim.tiny, for example, are preinstalled on every Proxmox VE node.
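
In the copy, the node's own address shows up as its ring0_addr entry, and the config_version in the totem section must be increased by one, or corosync will ignore the change. A rough sketch of the relevant parts (node name, numbers and address are placeholders):

nodelist {
  node {
    name: pve-node1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.20    # the new IP goes here
  }
}

totem {
  config_version: 3             # old value + 1
  ...
}

Once the copy looks right, activate it by moving it over the original:

mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf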

Read more

Install Qemu Guest Agent on Proxmox

The qemu-guest-agent is a helper daemon which is installed in the guest. It is used to exchange information between the host and the guest, and to execute commands in the guest.

In Proxmox VE, the qemu-guest-agent is used for mainly two things:

  • To properly shut down the guest, instead of relying on ACPI commands or Windows policies
  • To freeze the guest file system when making a backup (on Windows, via the Volume Shadow Copy Service, VSS)

Installation: Host
You have to enable the guest agent per VM, either in the GUI by setting the agent option to “Yes” under Options,

or via CLI:

qm set VMID --agent 1
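
Once the agent is also installed and running inside the guest (see the next section), you can check from the host that it answers. A sketch with a placeholder VMID of 100; depending on your Proxmox VE version the equivalent command may be "qm guest cmd 100 ping":

# returns silently (exit code 0) when the agent inside VM 100 responds
qm agent 100 ping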

Installation: Guest (Linux)
On Linux you simply have to install the qemu-guest-agent package; please refer to the documentation of your distribution.

Here we show the commands for Debian/Ubuntu and RedHat-based systems:
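
A sketch of the usual steps, assuming systemd-based guests (package names as shipped by the respective distributions):

# Debian/Ubuntu
sudo apt-get install qemu-guest-agent
sudo systemctl enable qemu-guest-agent
sudo systemctl start qemu-guest-agent

# RedHat/CentOS/Fedora
sudo yum install qemu-guest-agent
sudo systemctl enable qemu-guest-agent
sudo systemctl start qemu-guest-agent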

Read more

Install Proxmox VE 6 on Debian 10 (Buster)

Proxmox Virtual Environment (VE) is an enterprise-grade, open-source server virtualization solution based on the Debian Linux distribution with a modified Ubuntu LTS kernel. It allows you to deploy and manage both virtual machines and containers.

This setup presumes you have a Debian 10 (Buster) server up and running. If you don’t have one, follow our guide to Install Debian 10 on a dedicated server that will be used as the hypervisor. Please note that you need a 64-bit processor with support for the Intel 64 or AMD64 CPU extensions.

Below are the steps you’ll follow to install Proxmox VE 6 on Debian 10 (Buster).

Step 1: Update Debian OS

Update the apt package index and upgrade installed packages before getting started.

sudo apt -y update
sudo apt -y upgrade
sudo reboot

Step 2: Set system hostname

We need to set the hostname and make sure it is resolvable via /etc/hosts.

sudo hostnamectl set-hostname prox6node01.example.com --static
echo "10.1.1.10 prox6node01.example.com prox6node01" | sudo tee -a /etc/hosts

example.com should be replaced with a valid domain name.
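
To quickly verify that the name now resolves to the address you just added (and not to a loopback entry), something like this can help; adjust the hostname if you used your own:

hostname --fqdn              # should print prox6node01.example.com
getent hosts prox6node01     # should print 10.1.1.10 prox6node01.example.com prox6node01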

Read more

Converting OVA for use with KVM / QCOW2

The OVA file is nothing more than a TAR archive, containing the .OVF and .VMDK files. Easy!

Using Evergreen ILS for example:

~ $ file Evergreen_trunk_Squeeze.ova
Evergreen_trunk_Squeeze.ova: POSIX tar archive (GNU)

It’s possible to use the tar command to list the contents:

~ $ tar -tf Evergreen_trunk_Squeeze.ova 
Evergreen_trunk_Squeeze.ovf
Evergreen_trunk_Squeeze-disk1.vmdk

Simply extract those things…

~ $ tar -xvf Evergreen_trunk_Squeeze.ova
Evergreen_trunk_Squeeze.ovf
Evergreen_trunk_Squeeze-disk1.vmdk
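
From here the VMDK can be converted into a qcow2 image that KVM/QEMU handles natively. A sketch using qemu-img (the output file name is arbitrary):

~ $ qemu-img convert -f vmdk -O qcow2 Evergreen_trunk_Squeeze-disk1.vmdk Evergreen_trunk_Squeeze.qcow2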

Read more

Install Proxmox VE on Debian 9 – Stretch

The installation of a supported Proxmox VE server should be done via the Bare-metal_ISO_Installer. In some cases it makes sense to install Proxmox VE on top of a running Debian Stretch 64-bit, especially if you want a custom partition layout. For this HowTo the following Debian Stretch ISO was used:

Install a standard Debian Stretch (amd64)

Install a standard Debian Stretch (for details see the Debian installation guide) and select a fixed IP. It is recommended to only install the “standard” package selection and nothing else, as Proxmox VE brings its own packages for QEMU and LXC.

Add an /etc/hosts entry for your IP address
Please make sure that your hostname is resolvable via /etc/hosts, i.e. you need an entry in /etc/hosts which assigns an IPv4 address to that hostname.

Note: Make sure that no IPv6 address for your hostname is specified in /etc/hosts.

For instance, if your IP address is 192.168.15.77 and your hostname is prox4m1, then your /etc/hosts file should look like:
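
Something along these lines, where example.com stands in for your real domain:

127.0.0.1       localhost.localdomain localhost
192.168.15.77   prox4m1.example.com prox4m1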

Read more

Proxmox User Management – Proxmox VE authentication server

Command Line Tool

Most users will simply use the GUI to manage users, but there is also a fully featured command line tool called pveum (short for “Proxmox VE User Manager”). Please note that all Proxmox VE command line tools are wrappers around the API, so you can also access those functions through the REST API.
Here are some simple usage examples. To show help, type:

pveum

or (to show detailed help about a specific command)

pveum help useradd

Create a new user:

pveum useradd testuser@pve -comment "Just a test"

Set or change the password (not all realms support this):

pveum passwd testuser@pve

Disable a user:

pveum usermod testuser@pve -enable 0

Create a new group:
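
For instance (the group name and comment are arbitrary):

pveum groupadd testgroup -comment "Test group"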

Read more

Convert .ova and import it on Proxmox KVM virtualization

Let’s start by uploading the exported OVA file to the Proxmox server. Then extract the OVA file:

tar -xvf *.ova

This should output a couple of files from the OVA container; it should include an OVF file, which is the VM definition file, and a VMDK file, which is the actual hard disk image. Again, this may take a while.

Convert the vmdk to a Proxmox compatible qcow2 file:

qemu-img convert -f vmdk myvirtual-disk1.vmdk  -O qcow2 qcowdisk.qcow2

We now need to get the image into a VM with some hardware so that we can begin to use it. This is where things get tricky – the OVF file is not compatible with Proxmox and needs to be interpreted manually. The principle here is that we are going to use the Proxmox web GUI to create a VM and then replace the empty disk image it creates with our recently converted qcow2 image.

You can use vi to open the OVF file and understand some of the basic settings which are required for the VM. Open the OVF file and look for the following XML tags:

  • OperatingSystemSection
  • VirtualHardwareSection
  • Network
  • StorageControllers
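
Once you have created the VM in the GUI with a small empty qcow2 disk, one way to swap in the converted image is simply to overwrite that file on the storage. A sketch, assuming VM ID 100 and the default "local" directory storage – the actual path and disk file name will differ on your system (older releases name the file vm-100-disk-1.qcow2):

# the empty disk created by the GUI for VM 100
ls /var/lib/vz/images/100/
# replace it with the converted image
mv qcowdisk.qcow2 /var/lib/vz/images/100/vm-100-disk-0.qcow2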

Read more

Fix for connecting to an NFS server from a Proxmox CentOS 7/Debian container

I hope you already know how to allow NFS from the Proxmox host server. If not, you may read my earlier post:

NFS fix on LXC Host Server

The fix works for Proxmox 4.x

I was actually receiving an error like the one below:

# mount -t nfsd nfsd /proc/fs/nfsd
mount: nfsd is write-protected, mounting read-only
mount: cannot mount nfsd read-only

My Proxmox version was 5.0-30 and my CentOS was 7.

However, this case is a bit different from the one mentioned above. I was having trouble connecting my CentOS 7 LXC container to an NFS server on the network. The regular tweak didn’t work, so I had to spend a while googling for the solution. I found the correct one in a forum thread, and eventually it worked. For this you need to edit the file

nano /etc/pve/lxc/<your container ID>.conf

Add the line below to the conf file:

lxc.aa_profile: unconfined

Reboot the container and then try to connect to the NFS server again. It should work.

For Proxmox 5, a slightly reworked version:

First run

cp /etc/apparmor.d/lxc/lxc-default-cgns /etc/apparmor.d/lxc/lxc-default-with-nfs

Then edit the new file /etc/apparmor.d/lxc/lxc-default-with-nfs:
  • replace profile lxc-container-default-cgns with profile lxc-default-with-nfs
  • put the NFS configuration (see below) just before the closing bracket (})

NFS configuration

mount fstype=nfs*,
mount fstype=rpc_pipefs,

or (being more explicit)

mount fstype=nfs,
mount fstype=nfs4,
mount fstype=nfsd,
mount fstype=rpc_pipefs,
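
For reference, the edited copy might end up looking roughly like this – the body inherited from lxc-default-cgns varies between versions, so treat this only as a sketch of where the renamed profile line and the added mount rules go:

# /etc/apparmor.d/lxc/lxc-default-with-nfs
profile lxc-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # ... rules copied unchanged from lxc-default-cgns ...

  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
}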

and finally run

service apparmor reload

Use the new profile (prior to PVE 6.x)
Edit /etc/pve/lxc/${container_id}.conf and append this line:

lxc.apparmor.profile: lxc-default-with-nfs

Then stop the container and start it again, e.g. like this:

pct stop ${container_id} && pct start ${container_id}

Now mounting NFS shares should work.
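
A quick way to test from the host is to run the mount inside the container with pct exec; the server address and export path below are placeholders for your own NFS server:

# run a test mount inside the container
pct exec ${container_id} -- mount -t nfs 192.168.1.50:/export/backup /mnt
pct exec ${container_id} -- df -h /mnt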

Enable VNC viewer for Proxmox 2.x/3.x with tightvnc

Configure the Proxmox host to accept VNC connections:

aptitude install openbsd-inetd

Run this to get your KVM IDs:

qm list
root@homenet-home10 /etc # qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID 
101 freenas stopped 1024 32.00 0 
102 debpbx running 512 0.00 573304 
105 winxp stopped 512 15.01 0 
7012 ltsp-ldap-openfire-KVM running 512 9.00 495870 
7016 fbc16-kvm running 512 8.00 462697 
7159 win7 stopped 2048 0.00 0 
27014 ltsp-term-KVM stopped 512 0.00 0

Edit /etc/inetd.conf and add a port for each KVM guest you want to access via VNC:

#port kvm
59055 stream tcp nowait root /usr/sbin/qm qm vncproxy 105
59058 stream tcp nowait root /usr/sbin/qm qm vncproxy 7159

Then restart openbsd-inetd so that the new entries take effect.
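
A sketch of the restart and of connecting afterwards with the tightvnc client from another machine (the host IP is a placeholder; the double colon tells vncviewer to treat 59055 as a raw TCP port rather than a display number):

service openbsd-inetd restart

vncviewer 192.168.1.5::59055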

Read more
