There are six steps to correctly configuring SNMP on your Citrix XenServer hypervisor. These steps don’t require a system restart and are non-service-affecting.
To start, we assume you’re running Xen v6.x or v7.x, and are logged into the Xen CLI as root.
1. Enable the SNMP daemon
Enable the snmpd daemon by typing:
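On XenServer’s CentOS-based dom0, the snmpd daemon is normally started and enabled with the sysvinit tools — a sketch, assuming the stock net-snmp package is installed:

```shell
# Start snmpd now and have it come up again at boot
service snmpd start
chkconfig snmpd on
```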
2. Configure the SNMP service
Make a backup of the snmpd.conf file. The default snmpd.conf file contains a lot of useful documentation for more advanced implementations of SNMP.
# cp /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.backup
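With the backup in place, a minimal working snmpd.conf might look like the sketch below — the community string, subnet, and contact details are all example values to replace with your own:

```
# /etc/snmp/snmpd.conf - minimal read-only configuration (example values)
rocommunity mycommunity 192.168.1.0/24
syslocation "Server room, rack 4"
syscontact admin@example.com
```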
Continue reading “enable SNMP on Xen (XCP-NG) hypervisors” »
First, try to unlock the container with pct (assuming your troubled container is 101):
pct unlock 101
If that works, just stop and start the container again. If it doesn’t work (as in my case), try to stop it with this:
lxc-stop --name 101
If that doesn’t work either (again, my case), you can force a stop with the kill command:
ps ax | grep lxc
Then kill the process matching your container ID (101 for me) with kill pid, replacing pid with the process ID from the output above. After that you can just start your container again.
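The whole forced-stop sequence can be put together as a small script; 101 is the example container ID from above, and find_lxc_pid is a helper name invented here for clarity:

```shell
# Pull the PID of the lxc-start process for a given container ID out of
# `ps` output; the [l] trick stops grep from matching its own process
find_lxc_pid() {
  echo "$1" | grep "[l]xc-start" | grep -w "$2" | awk '{print $1}'
}

pid=$(find_lxc_pid "$(ps ax)" 101)
if [ -n "$pid" ]; then
  # Force-kill the stuck container process
  kill -9 "$pid"
fi
```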
Copy and paste the following command into the terminal:
(6.1 and up)
(6.2-11 and up)
(6.2-12 and up) Continue reading “Remove Proxmox Subscription Notice” »
By default, Remote Display listens only on localhost / 127.0.0.1 and cannot be accessed by IP address or hostname.
Check VRDE / Remote Display IP Address
You can check the VRDE / Remote Display IP address using the following methods:
Open a command prompt and run netstat -an | find /i "listening" or netstat -an | find /i "[PORT_NUMBER]" and you will see that it is listening on 127.0.0.1:PORT. Continue reading “Virtualbox fixing VRDE on 0.0.0.0 instead 127.0.0.1” »
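The fix the post goes on to cover is rebinding the VRDE server to all interfaces rather than loopback; a sketch with VBoxManage, where "MyVM" is a placeholder for your VM name:

```shell
# Bind the VM's remote display to all interfaces instead of 127.0.0.1
VBoxManage modifyvm "MyVM" --vrdeaddress 0.0.0.0
```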
Step 1 : Migrate all VMs to another active node
Migrate all VMs to another active node. You can use the live migration feature if you have shared storage, or offline migration if you only have local storage.
Step 2 : Display all active nodes
Display all active nodes in order to identify the name of the node you want to remove. Continue reading “Remove Node from Proxmox Cluster” »
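Assuming the standard Proxmox cluster tooling, these two steps map onto pvecm like so, where nodename stands in for the node you identified:

```shell
# Show every node in the cluster, with IDs and online status
pvecm nodes

# Remove the node (after its VMs are migrated and it is powered off)
pvecm delnode nodename
```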
When trying to “Stop” or “Shutdown” virtual machine from Proxmox (PVE) web gui, the “Cluster log” shows
end task UPID:pve:xxxxxxxx:xxxxxxxx:xxxxxxx:qmstop:xxx:root@pam: can’t lock file ‘/var/lock/qemu-server/lock-xxx.conf’ -got timeout
end task UPID:pve:xxxxxxxx:xxxxxxxx:xxxxxxx:qmreboot:xxx:root@pam: VM quit/powerdown failed
We can manually delete the lock from the following path:
# The file will be
Make sure to delete only the correct one!
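Putting the path from the log message together with that warning, a cautious removal might look like this sketch, where 100 is an example VMID:

```shell
VMID=100   # example - substitute the ID of the stuck VM
LOCK="/var/lock/qemu-server/lock-${VMID}.conf"

# Delete the stale lock only if it actually exists
if [ -f "$LOCK" ]; then
  rm -v "$LOCK"
fi
```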
You can also do it using a script from this site:
Sparse disk image formats such as qcow2 consume only the physical disk space they actually need. For example, if a guest is given a qcow2 image with a size of 100GB but has only written to 10GB, then only 10GB of physical disk space will be used. There is some slight overhead involved, so the above example may not be strictly true, but you get the idea.
Sparse disk image files allow you to over-allocate virtual disk space – this means that you could give 5 virtual machines 100GB of disk space each, even if you only have 300GB of physical disk space. If all the guests need 100% of their 100GB disk space, then you will have a problem. If you over-allocate disk space, you will need to monitor the physical disk usage very carefully.
There is another problem with sparse disk formats: they don’t automatically shrink. Let’s say you fill 100GB of a sparse disk (we know this will consume roughly 100GB of physical disk space) and then delete some files so that you are only using 50GB. The physical disk space used should be 50GB, right? Wrong. Because the disk image doesn’t shrink, it will stay at 100GB on the file system even if the guest is now using less. The steps below detail how to get around this issue. Continue reading “Reclaim disk space from a sparse image file qcow2/ vmdk” »
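The standard way around it, which the full post walks through, is to zero the guest’s free space and then rewrite the image on the host; a sketch, where disk.qcow2 is a placeholder path and the guest must be shut down for the host-side step:

```shell
# Inside the guest: fill free space with zeros, then remove the file
# (dd stops with an error once the disk is full - that is expected)
dd if=/dev/zero of=/zerofile bs=1M
rm -f /zerofile

# On the host, with the guest shut down: rewriting the image drops the
# zeroed clusters, shrinking the file back to its real usage
qemu-img convert -O qcow2 disk.qcow2 disk-shrunk.qcow2
mv disk-shrunk.qcow2 disk.qcow2
```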
In this guide we will go over creating a Proxmox KVM template from a cloud image. The same process will work for any Cloud-Init-enabled OpenStack image you can find online.
Having done a number of these for our Proxmox-based VPS service, I wanted to post a guide to help anyone else looking to do the same thing.
My workflow for customizing one of those images for use with Proxmox, with cloud-init deployment from WHMCS and root login, is below. Once you set up one template you can rapidly reinstall new containers and test stuff.
If not already installed, you will need libguestfs-tools:
apt-get install libguestfs-tools
To edit the image before importing, we will use virt-edit, which is part of libguestfs-tools. Continue reading “Proxmox Cloud-Init OS template creation” »
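As one concrete example of that edit step — useful for the root-login workflow mentioned above — virt-edit can rewrite a file inside the image in place; the image filename here is an assumption:

```shell
# Allow root logins in the image's sshd_config before importing it
# (jammy-server-cloudimg-amd64.img is an assumed filename)
virt-edit -a jammy-server-cloudimg-amd64.img /etc/ssh/sshd_config \
  -e 's/^#?PermitRootLogin.*/PermitRootLogin yes/'
```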
Download the Win-VirtIO driver ISO and load it in the VM’s CD-ROM drive. The download can be found here:
Now install the Virtio Balloon driver AND the Balloon service in the guest as follows:
- Open Device Manager and see if there is an unknown PCI device. If so, right-click it and install the driver manually from D:\Balloon\2K16\amd64 (or 2k12, 2k8, etc.)
- Now copy the entire amd64 folder into C:\Program Files\ (NOT x86) and rename it “Balloon”. So, now you have the amd64 folder from the disc copied as C:\Program Files\Balloon
- Open an Administrative Command Prompt and cd to C:\Program Files\Balloon
- Run this command:
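On the VirtIO driver ISOs the Balloon folder ships a service installer, blnsvr.exe; assuming the layout from the previous steps, registering the service typically looks like this:

```shell
cd "C:\Program Files\Balloon"
blnsvr.exe -i
```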
Continue reading “Fixing Slow Windows VM boot on Proxmox KVM with balloon driver” »