Enable SNMP on Xen (XCP-NG) Hypervisors

There are six steps to correctly configure SNMP on your Citrix Xen hypervisor. They don’t require a system restart and won’t interrupt running services.

To start, we assume you’re running XenServer v6.x or v7.x and are logged in to the Xen CLI as root.

1. Enable the SNMP daemon

Enable the snmpd daemon by typing:

chkconfig snmpd on
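
On newer, systemd-based hosts (XenServer 7.x and XCP-ng), chkconfig is deprecated; the systemd equivalent is:

systemctl enable snmpd
systemctl start snmpd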

2. Configure the SNMP service

Make a backup of the snmpd.conf file. The default snmpd.conf file contains a lot of useful documentation for more advanced implementations of SNMP.

# cp /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.backup
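
A minimal read-only configuration then only needs a few lines (the community string, subnet, and contact details below are examples; substitute your own):

rocommunity public 192.0.2.0/24
syslocation "Rack 4, Example DC"
syscontact admin@example.com

Restart the daemon afterwards with service snmpd restart so the changes take effect.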



Force Shutdown Xen VM

Instructions

  1. Disable High Availability (HA) so you don’t run into issues
  2. Log into the XenServer host that is running the problem VM, via SSH or the console in XenCenter
  3. Run the following command to list the VMs on the host and their UUIDs (see the note after this list for finding the host UUID)
    xe vm-list resident-on=<uuid_of_host>
  4. First you can try just the normal shutdown command with force
    xe vm-shutdown uuid=<UUID from step 3> force=true
  5. If that just hangs, use CONTROL+C to kill it off and try to reset the power state. The force flag is required on this command
    xe vm-reset-powerstate uuid=<UUID from step 3> force=true
  6. If the VM is still not shut down, we may need to destroy the domain
  7. Run this command to get the domain ID of the VM. It is the number in the first column of output. The list shows the VMs on the host: Dom0 is the host itself, and every numbered entry after it is a running VM
    list_domains
  8. Now run this command using the domain ID from the output of step 7
    xl destroy <DOMID from step 7>
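
If you need the host UUID for step 3, the standard xe tooling prints it:

    xe host-list

The uuid field in that output is the value the resident-on parameter expects.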

Force Stop Proxmox LXC

First, try to unlock the container (assuming your troubled container is 101):

pct unlock 101

If that works, just stop and start the container again. If it doesn’t work (as in my case), try to stop it with:

lxc-stop --name 101

If that doesn’t work either (again, my case), you can force-stop it with the kill command. First, find the container’s process:

ps ax | grep lxc

Then kill the process matching your container ID (101 for me) with kill <pid>, replacing <pid> with the process ID from the ps output. After that you can simply start your container again.
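
For reference, a slightly more targeted version of the kill approach (container ID 101 is an example):

ps ax | grep '[l]xc' | grep 101
kill <pid>

The [l] in the pattern stops grep from matching its own process; escalate to kill -9 only if the plain kill is ignored.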


Remove Proxmox Subscription Notice

Copy and paste the following command into the terminal:

(6.1 and up)

sed -i.backup "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service

(6.2-11 and up)

sed -i.backup -z "s/res === null || res === undefined || \!res || res\n\t\t\t.false/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service

(6.2-12 and up): the matched pattern changes again in this release.
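
Whichever variant you ran, sed -i.backup leaves a copy of the untouched file next to the original, so you can roll back at any time:

cp /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js.backup /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service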


VirtualBox: Fixing VRDE to Listen on 0.0.0.0 Instead of 127.0.0.1

By default, Remote Display only works on localhost / 127.0.0.1 and cannot be accessed by IP address or hostname.

Check VRDE / Remote Display IP Address
You can check the VRDE / Remote Display IP address using the following methods:

Open a command prompt and run netstat -an | find /i "listening" or netstat -an | find /i "[PORT_NUMBER]" and you will notice it is listening on 127.0.0.1:PORT.
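
The bind address itself is set per VM with VBoxManage while the machine is powered off (the VM name below is an example):

VBoxManage modifyvm "MyVM" --vrdeaddress 0.0.0.0

With --vrdeaddress set to 0.0.0.0, VRDE listens on all interfaces and the console becomes reachable by IP address or hostname.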


Remove Node from Proxmox Cluster

Step 1: Migrate all VMs to another active node

Migrate all VMs to another active node. You can use the live migration feature if you have shared storage, or offline migration if you only have local storage.

Step 2: Display all active nodes

Display all active nodes in order to identify the name of the node you want to remove.
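
The standard cluster tooling lists them:

pvecm nodes

The Name column in that output is the value you will later pass to pvecm delnode.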


Fix Proxmox (PVE) “can’t lock file ‘/var/lock/qemu-server/lock-xxx.conf’ – got timeout” (Proxmox can’t shutdown/stop or kill/force-stop a virtual machine)

The Issue

When trying to “Stop” or “Shutdown” a virtual machine from the Proxmox (PVE) web GUI, the “Cluster log” shows

end task UPID:pve:xxxxxxxx:xxxxxxxx:xxxxxxx:qmstop:xxx:root@pam: can't lock file '/var/lock/qemu-server/lock-xxx.conf' - got timeout
end task UPID:pve:xxxxxxxx:xxxxxxxx:xxxxxxx:qmreboot:xxx:root@pam: VM quit/powerdown failed

The Fix

We can manually delete the lock file from the following path:

/run/lock/qemu-server
# The file will be
/run/lock/qemu-server/lock-100.conf
/run/lock/qemu-server/lock-102.conf
...

Make sure to delete only the correct one!
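
For example, if VM 100 is the stuck one (double-check the VMID first):

rm /run/lock/qemu-server/lock-100.conf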

You can also do it using a script from this site:

https://dannyda.com/2020/05/11/how-to-fix-proxmox-pve-cant-lock-file-var-lock-qemu-server-lock-xxx-conf-got-timeout-proxmox-cant-shutdown-virtual-machine/


Reclaim disk space from a sparse image file (qcow2/vmdk)

Sparse disk image formats such as qcow2 only consume the physical disk space which they need. For example, if a guest is given a qcow2 image with a size of 100GB but has only written to 10GB then only 10GB of physical disk space will be used. There is some slight overhead associated, so the above example may not be strictly true, but you get the idea.

Sparse disk image files allow you to over-allocate virtual disk space – this means that you could allocate 5 virtual machines 100GB of disk space each, even if you only have 300GB of physical disk space. If all the guests need 100% of their 100GB disk space then you will have a problem. If you over-allocate disk space you will need to monitor the physical disk usage very carefully.

There is another problem with sparse disk formats: they don’t automatically shrink. Let’s say you fill 100GB of a sparse disk (we know this will roughly consume 100GB of physical disk space) and then delete some files so that you are only using 50GB. The physical disk space used should be 50GB, right? Wrong. Because the disk image doesn’t shrink, it will always be 100GB on the file system even if the guest is now using less. The steps below detail how to get around this issue.
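
In outline, the usual approach (file names here are illustrative) is to zero the free space inside the guest and then, with the VM shut down, rewrite the image on the host:

# Inside the guest: fill free space with zeros (dd exits when the disk is full), then remove the filler
dd if=/dev/zero of=/zerofile bs=1M
rm -f /zerofile

# On the host, with the VM powered off: rewrite the image
qemu-img convert -O qcow2 original.qcow2 compacted.qcow2
mv compacted.qcow2 original.qcow2

qemu-img convert skips clusters that are entirely zero, so the rewritten image consumes only the space the guest actually uses.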


Proxmox Cloud-Init OS template creation

Introduction

In this guide we will go over creating a Proxmox KVM template from a cloud image. The same process will work for any Cloud-Init-enabled, OpenStack-style image you can find online.

Having done a number of these for our Proxmox-based VPS service, I wanted to post a guide to help anyone else looking to do the same thing.

My workflow for customizing one of these images for use with Proxmox, with cloud-init deployment from WHMCS and root login, is below. Once you set up one template you can rapidly reinstall new instances and test things.

Setup Environment

If not already installed, you will need libguestfs-tools:

apt-get install libguestfs-tools

We will use virt-edit, part of libguestfs-tools, to edit the image before importing.
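
For example, virt-edit can open a file inside a downloaded cloud image directly (the image file name is illustrative):

virt-edit -a focal-server-cloudimg-amd64.img /etc/cloud/cloud.cfg

This is useful for tweaking cloud-init defaults such as disable_root before the image is imported as a template.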


Fixing Slow Windows VM boot on Proxmox KVM with balloon driver

Download the Windows VirtIO driver ISO and load it in the VM’s CD-ROM drive. The download can be found here:

https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers

Now install the Virtio Balloon driver AND the Balloon service in the guest as follows:

  1. Open Device Manager and see if there is an unknown PCI device. If so, right-click it and install the driver manually from D:\Balloon\2K16\amd64 (or 2k12, 2k8, etc.)
  2. Now copy the entire amd64 folder into C:\Program Files\ (NOT x86) and rename it "Balloon". So, now you have the amd64 folder from the disc copied as C:\Program Files\Balloon
  3. Open an Administrative Command Prompt and cd to C:\Program Files\Balloon
  4. Run this command:

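Per the Proxmox wiki linked above, the Balloon service is installed with:

blnsvr.exe -i

Once the service is running, the guest reports memory usage back to the host and the slow boot caused by the missing balloon service should be resolved.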
