ip_conntrack table full, dropping packet

Last week I found myself with a server that was under low load but couldn’t make or receive network connections. When I ran dmesg, I found the following line repeating over and over:

ip_conntrack: table full, dropping packet

I’d seen this message before, but I headed over to Red Hat’s site for more details. It turned out that the server was running iptables and handling a very high volume of network connections. Generally, ip_conntrack_max is set to the total MB of RAM installed multiplied by 16. This server had 4GB of RAM, and ip_conntrack_max was set to 65536:

# cat /proc/sys/net/ipv4/ip_conntrack_max
65536

I logged into another server with 1GB of RAM (RHES 5, 32-bit) and another with 2GB of RAM (RHES 4, 64-bit), and both had ip_conntrack_max set to 65536. I’m not sure if this is a known Red Hat issue, or if it’s just set to a standard value out of the box.
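For what it’s worth, the rule of thumb gives exactly 65536 for a 4 GB machine (4096 MB times 16), while the 1 GB and 2 GB machines showed the same 65536, which supports the "standard value out of the box" theory. A quick arithmetic check in the shell:

```shell
# rule-of-thumb conntrack ceiling: RAM in MB multiplied by 16
for ram_mb in 1024 2048 4096; do
    echo "${ram_mb} MB RAM -> $((ram_mb * 16))"
done
```

If you do raise the limit, the runtime value lives in the /proc path shown above; persist any change via sysctl so it survives a reboot.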

Read more


Auto installation of Zenoss on CentOS 6

The simplest way to install Zenoss Core 4.2 on a newly deployed 64-bit RHEL/CentOS 5/6 system is to use our auto-deploy script, which downloads all required files for you. To use the script, first set up a new server running one of our supported operating systems. Then, as root, run the following commands:

# wget https://github.com/zenoss/core-autodeploy/tarball/4.2.5 -O auto.tar.gz
# tar xvf auto.tar.gz
# cd zenoss-core-autodeploy-*

Now, you have the option of editing zenpack_actions.txt, which defines all ZenPacks that will be installed by default (all Core ZenPacks). If you would like to avoid installing certain ZenPacks, then remove the corresponding lines from this file and save it.
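For example, to skip a single ZenPack non-interactively, you could filter the file with grep. A sketch: ZenPacks.zenoss.ApacheMonitor is just an illustrative entry, and the here-document below stands in for the real zenpack_actions.txt; with the real file, run the same grep against it and write the result back.

```shell
# drop one ZenPack from a (stand-in) install list
grep -v 'ZenPacks.zenoss.ApacheMonitor' <<'EOF'
ZenPacks.zenoss.ApacheMonitor
ZenPacks.zenoss.MySqlMonitor
EOF
```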

Now you are ready to install Zenoss Core 4.2.5:

# ./core-autodeploy.sh    # do not use "tee" or similar; /opt/zenoss/log/install.log will be created by the script.

Src: http://wiki.zenoss.org/Install_Zenoss


413 Request Entity Too Large

If you’re getting "413 Request Entity Too Large" errors when uploading files through nginx, you need to increase the size limit in nginx.conf. Add "client_max_body_size xxM" inside the server section, where xx is the size (in megabytes) that you want to allow.

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;

    server {
        client_max_body_size 20M;
        listen      80;
        server_name localhost;

        # Main location
        location / {
            proxy_pass http://127.0.0.1:8000/;
        }
    }
}

Understanding NAT, Direct Routing & Tunneling

Virtual Server via NAT
The advantage of the virtual server via NAT is that real servers can run any operating system that supports TCP/IP, that real servers can use private Internet addresses, and that only one IP address is needed for the load balancer.

The disadvantage is that the scalability of the virtual server via NAT is limited. The load balancer may become a bottleneck of the whole system when the number of server nodes (general PC servers) increases to around 20 or more, because both request packets and response packets need to be rewritten by the load balancer. Supposing the average length of TCP packets is 536 bytes and the average delay for rewriting a packet is around 60us (on a Pentium processor; this can be reduced a little by using a faster processor), the maximum throughput of the load balancer is 8.93 MBytes/s. Assuming the average throughput of each real server is 400 KBytes/s, the load balancer can schedule 22 real servers.
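The arithmetic behind those figures can be checked with integer math in the shell, using the numbers straight from the paragraph above:

```shell
# one 536-byte packet per 60 microseconds of rewrite time
echo $((536 * 1000000 / 60))            # bytes/s the balancer can rewrite (~8.93 MBytes/s)
echo $((536 * 1000000 / 60 / 400000))   # real servers at 400 KBytes/s each
```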

Read more


Testing FreeRADIUS on pfSense

FreeRADIUS offers an easy-to-use command line tool to check whether the server is running and listening for incoming requests. An interface, a NAS/client and a user must all be configured:

  • Add a user with the following configuration:
    Username: testuser
    Password: testpassword
  • Add a client/NAS with the following configuration:
    IP-Address: 127.0.0.1
    Shared Secret: testing123
  • Add an interface with the following configuration:
    IP-Address: 127.0.0.1
    Interface-Type: Auth
    Port: 1812
  • SSH to the pfSense firewall and type the following on the command line while FreeRADIUS is running (check the System Log first):
    radtest testuser testpassword 127.0.0.1:1812 0 testing123

The following output should appear if everything was set up correctly:

Read more


TeamViewer for Headless Linux Unattended System Access

I googled for hours and couldn’t find solid documentation on this. After stitching together material from many different sources, I prepared a modest installation guide (at least it worked for me). My Linux OS is Debian 8.x; I believe it should work on other Debian versions and Ubuntu as well. But before continuing, make sure:

  1. You have a TeamViewer account.
  2. The workstation (assuming a Windows client PC) has the TeamViewer client program installed to access the headless remote Linux system.

Read more


NFS fix on LXC Host Server

The NFS client inside an LXC container doesn’t seem to work. Why? The problem is AppArmor on the host machine, which blocks any attempt to mount NFS volumes.
To minimize the security changes to AppArmor, I added the following lines to /etc/apparmor.d/lxc/lxc-default:

# allow nfs mount everywhere

mount fstype=rpc_pipefs, 
mount fstype=nfs,

Then

# /etc/init.d/apparmor reload

And now I was able to restart nfs-common and nfs-kernel-server without errors!

Update:

nano /etc/apparmor.d/lxc/lxc-default

Update the file as below:

# Do not load this file. Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default flags=(attach_disconnected,mediate_deleted) {
#include <abstractions/lxc/container-base>

# the container may never be allowed to mount devpts. If it does, it
# will remount the host's devpts. We could allow it to do it with
# the newinstance option (but, right now, we don't).
# deny mount fstype=devpts,

# allow nfs mount everywhere

mount fstype=rpc_pipefs,
mount fstype=nfs,
}


Now read the other article on how to connect to an NFS server from an LXC container.


Remotely Administering pfSense

To open the firewall GUI up completely, create a firewall rule to allow remote firewall administration – do not create a port forward or any other NAT configuration.

Example Firewall Rule Setup

  • Firewall > Rules, WAN Tab
  • Action: pass
  • Interface: WAN
  • Protocol: TCP
  • Source: Any (or restrict by IP/subnet)
  • Destination: WAN Address
  • Destination port range: HTTPS (Or the custom port)
  • Description: Allow remote management from anywhere (Dangerous!)

Read more


Reverse Proxy with Caching

A sample Nginx reverse-proxy configuration, an alternative to Varnish Cache (somewhat simpler):

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    proxy_cache_path /cache levels=1:2 keys_zone=STATIC:10m
                     inactive=24h max_size=1g;

    server {
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_cache STATIC;
            proxy_cache_valid 200 1d;
            proxy_cache_use_stale error timeout invalid_header updating
                                  http_500 http_502 http_503 http_504;
        }
    }
}
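Not part of the original config, but handy for verifying the cache: nginx exposes the $upstream_cache_status variable (MISS, HIT, EXPIRED, and so on), which can be surfaced as a response header inside the location block:

```nginx
# add inside the "location /" block to see whether a response came from cache
add_header X-Cache-Status $upstream_cache_status;
```

A couple of `curl -I` requests against the same URL should then show MISS followed by HIT for cacheable responses.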

Reinstalling MySQL on CentOS/Redhat 6

Sometimes we face issues with a MySQL installation on a Linux machine. If simply removing the MySQL packages and re-installing doesn’t fix the issue, old settings may still exist on the server and affect the new install. In that case, first uninstall MySQL completely and erase all settings of the old install. To do so, follow the steps below.

Note: Please do not use the steps below if MySQL has any running databases.

Step 1: Uninstall MySQL Packages
First, uninstall all the MySQL packages installed on your server:

# yum remove mysql mysql-server

Step 2: Remove the MySQL Data Directory
Now we need to remove the MySQL data directory, which by default is at /var/lib/mysql. If it isn’t there, it may have been changed to another location, which you can find in the my.cnf file under the datadir variable. You could delete /var/lib/mysql entirely, but we prefer to rename it to keep a backup of the existing files.
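To confirm where datadir points, you can pull it out of my.cnf with awk. A sketch: the here-document below stands in for a typical /etc/my.cnf with the stock default path; against a real system, feed the actual file to awk instead.

```shell
# extract the datadir setting (here-doc is a stand-in for /etc/my.cnf)
awk -F= '/^datadir/ {print $2}' <<'EOF'
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
EOF
```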

# mv /var/lib/mysql /var/lib/mysql_old_backup

Read more
