Let’s start by uploading the exported OVA file to the Proxmox server. Then extract the OVA file:
tar -xvf *.ova
This should extract a couple of files from the OVA container, including an OVF file, which is the VM definition file, and a VMDK file, which is the actual hard disk image. Again, this may take a while.
Convert the vmdk to a Proxmox compatible qcow2 file:
qemu-img convert -f vmdk myvirtual-disk1.vmdk -O qcow2 qcowdisk.qcow2
We now need to get the image into a VM with some hardware so that we can begin to use it. This is where things get tricky – the OVF file is not directly usable by Proxmox and has to be read by hand. The idea is to use the Proxmox web GUI to create a VM, then replace the empty disk image it creates with our freshly converted qcow2 image.
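As a rough sketch of that disk swap: the VMID (200 here), the storage path, and the disk file name are all assumptions – on a real host, check what the GUI actually created under /var/lib/vz/images. The example below simulates the layout in the current directory so it can be tried anywhere:

```shell
# Simulated Proxmox directory layout; on a real host drop the leading "./"
VMID=200                              # assumption: the VMID you created in the GUI
IMGDIR="./var/lib/vz/images/$VMID"    # default "local" storage path on Proxmox
mkdir -p "$IMGDIR"

# Stand-in for the image produced by the qemu-img convert step above
touch qcowdisk.qcow2

# Replace the empty disk the GUI created with our converted image
cp qcowdisk.qcow2 "$IMGDIR/vm-$VMID-disk-1.qcow2"
ls -l "$IMGDIR"
```

On the real host you would overwrite the existing `vm-<vmid>-disk-1.qcow2` that the GUI created, keeping its exact name so the VM config still points at it.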
You can use vi to open the OVF file and understand some of the basic settings which are required for the VM. Open the OVF file and look for the following XML tags:
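Typical candidates are the `rasd:*` resource items – memory size, CPU count, and so on. The tag names below are common OVF examples, not necessarily exactly what your file contains; the snippet builds a toy OVF fragment so the grep is runnable anywhere:

```shell
# Toy OVF fragment standing in for the real file
cat > sample.ovf <<'EOF'
<Item>
  <rasd:ElementName>2 virtual CPU(s)</rasd:ElementName>
  <rasd:VirtualQuantity>2</rasd:VirtualQuantity>
</Item>
<Item>
  <rasd:ElementName>4096MB of memory</rasd:ElementName>
  <rasd:VirtualQuantity>4096</rasd:VirtualQuantity>
</Item>
EOF

# Pull out the resource names and their quantities
grep -E 'rasd:(ElementName|VirtualQuantity)' sample.ovf
```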
Read more: “Convert .ova and import it on Proxmox KVM virtualization”
I hope you already know how to enable NFS on the Proxmox host server. If not, you may read my earlier post:
NFS fix on LXC Host Server
I was actually receiving an error like the one below:
# mount -t nfsd nfsd /proc/fs/nfsd
mount: nfsd is write-protected, mounting read-only
mount: cannot mount nfsd read-only
My Proxmox version was 5.0-30 and the container was running CentOS 7.
However, this case is a bit different from the one mentioned above. I was trying to connect my CentOS 7 LXC container to an NFS server on the network, and the regular tweak didn’t work, so I had to spend a while googling for a solution. I eventually found the correct one in a forum thread, and it worked. For this you need to edit the container’s config file:
nano /etc/pve/lxc/<your container ID>.conf
Add the following line to the conf file:
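The post omits the exact line at this point; the fix commonly cited on the Proxmox forums for 5.x (LXC 2.x) is to disable the container’s AppArmor profile. Note that this reduces isolation, and on newer releases the key is named `lxc.apparmor.profile` instead:

```
lxc.aa_profile: unconfined
```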
Reboot the container, then try connecting to the NFS server again. It should work now.
Configure the Proxmox host for TLS connections; this configures the host to accept VNC connections.
aptitude install openbsd-inetd
Run this to get your KVM IDs:
root@homenet-home10 /etc # qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
101 freenas stopped 1024 32.00 0
102 debpbx running 512 0.00 573304
105 winxp stopped 512 15.01 0
7012 ltsp-ldap-openfire-KVM running 512 9.00 495870
7016 fbc16-kvm running 512 8.00 462697
7159 win7 stopped 2048 0.00 0
27014 ltsp-term-KVM stopped 512 0.00 0
Edit /etc/inetd.conf and add a port for each KVM guest you want to access over VNC:
59055 stream tcp nowait root /usr/sbin/qm qm vncproxy 105
59058 stream tcp nowait root /usr/sbin/qm qm vncproxy 7159
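If you have many guests, the lines above can be generated with a small loop. The port-to-VMID pairs are whatever you choose; the ones below simply mirror the example:

```shell
# Emit one inetd line per "port VMID" pair
while read -r port vmid; do
  printf '%s stream tcp nowait root /usr/sbin/qm qm vncproxy %s\n' "$port" "$vmid"
done > inetd-lines.txt <<'EOF'
59055 105
59058 7159
EOF
cat inetd-lines.txt   # review, then append to /etc/inetd.conf
```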
Restart openbsd-inetd (e.g. `/etc/init.d/openbsd-inetd restart`).
Read more: “Enable VNC viewer for Proxmox 2.x/3.x with tightvnc”
INFO: setting parameters failed - VM is locked (backup)
ERROR: Backup of VM 516 failed - command 'qm set 516 --lock backup' failed with exit code 255
To clear the stale lock so the VM can be managed again, run:
qm unlock <vmid>
Proxmox VE authentication server
This is a unix-like password store (/etc/pve/priv/shadow.cfg). Passwords are hashed using the SHA-256 method. Users are allowed to change their passwords.
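As an aside, a SHA-256 crypt hash of the kind used in such shadow files can be produced with openssl. This is an illustration only, not the tool Proxmox itself uses, and the salt here is arbitrary:

```shell
# Generate a sha256-crypt hash (the "$5$" prefix); requires OpenSSL 1.1.1+
openssl passwd -5 -salt examplesalt 'S3cret!'
```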
Terms and Definitions
A Proxmox VE user name consists of two parts: <userid>@<realm>. The login screen on the GUI shows them as separate items, but internally they are used as a single string.
We store the following attributes for users (/etc/pve/user.cfg):
- first name
- last name
- email address
- expiration date
- flag to enable/disable account
The traditional unix superuser account is called ‘root@pam’. All system mails are forwarded to the email address assigned to that account.
Read more: “User Management in Proxmox”
I was installing Proxmox 4.x on my new server systems, which have SAS disks behind an LSI MPT2 RAID controller. The installation went perfectly; however, on the first boot after installation I was getting errors like those in the screenshot below.
After googling a lot, I found the solution. Here it is:
Read more: “Proxmox 4 Installation Issue on LSI Raid Systems”