Server with CentOS 7 – 64bit
2 GB or more RAM (Recommended)
Root Privileges on the server
Step 1 – Install Java (JRE and JDK)
In this step, we will install the Java JRE and JDK from the CentOS repository. We will install Java 1.8.0 (OpenJDK 8) on the server with the yum command.
Run this command to install Java JRE and JDK from CentOS repository with yum:
yum -y install java-1.8.0-openjdk.x86_64 java-1.8.0-openjdk-devel.x86_64
It will take some time; wait until the installation is finished.
Then you should check the Java version with the command below:
java -version
You should see results similar to the ones below:
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-b15)
OpenJDK 64-Bit Server VM (build 25.111-b15, mixed mode)
Step 2 – Configure the Java Home Environment
Continue reading “Install Apache Tomcat 8.5 on CentOS 7.3” »
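Step 2 typically means setting JAVA_HOME system-wide. A minimal sketch, assuming an OpenJDK path layout (the directory name below is an example; resolve the real one with readlink on your server):

```shell
# Resolve the real JDK directory behind the java binary, then strip the
# trailing /jre/bin/java to get JAVA_HOME.
JAVA_BIN="/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-0.b15.el7_2.x86_64/jre/bin/java"  # example; use: readlink -f /usr/bin/java
JAVA_HOME="${JAVA_BIN%/jre/bin/java}"
echo "JAVA_HOME=$JAVA_HOME"

# To persist it for all users (requires root):
# echo "export JAVA_HOME=$JAVA_HOME" > /etc/profile.d/java.sh
```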
For this setup, we need four nodes (two Apache nodes and two load balancer nodes) and five IP addresses: one for each node and one virtual IP address that will be shared by the load balancer nodes and used for incoming HTTP requests.
I will use the following setup here:
Apache node 1: webserver1.tm.local (webserver1) – IP address: 192.168.0.103; Apache document root: /var/www
Apache node 2: webserver2.tm.local (webserver2) – IP address: 192.168.0.104; Apache document root: /var/www
Load Balancer node 1: loadb1.tm.local (loadb1) – IP address: 192.168.0.101
Load Balancer node 2: loadb2.tm.local (loadb2) – IP address: 192.168.0.102
Virtual IP Address: 192.168.0.105 (used for incoming requests)
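If you do not have a DNS server in place yet, the hostnames above can be mapped locally on each node; a sketch of the /etc/hosts entries for this setup (run as root on every node):

```shell
# Append the cluster hostnames from the setup above to /etc/hosts
cat >> /etc/hosts <<'EOF'
192.168.0.101  loadb1.tm.local      loadb1
192.168.0.102  loadb2.tm.local      loadb2
192.168.0.103  webserver1.tm.local  webserver1
192.168.0.104  webserver2.tm.local  webserver2
EOF
```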
In this tutorial I will use Ubuntu 8.04 LTS for all four nodes; just install a basic Ubuntu 8.04 LTS system on each of them. Note that this is not the only way of setting up such a system; there are many ways of achieving this goal, but this is the way I take. I do not issue any guarantee that this will work for you! I also recommend that you have a DNS server in place. Continue reading “Load Balancing using Ldirectord on Linux (Apache) web server” »
MySQL multi-master replication is an excellent feature within MySQL. However, there is one problem: standard multi-master replication never seems to be as stable as something like master-slave replication, and it is always in need of attention. That is where Percona comes into play. The Percona team has developed an amazing product dubbed Percona XtraDB Cluster. XtraDB features world-class multi-master replication powered by Galera. So, what are we waiting for? Let’s get started.
A Linux distro of your choice. In this guide, we will be using Debian 7. You can use a different distro if you would like. (Note that you may need to adapt this guide to work with the distro of your choice)
Two nodes running the same OS.
Basic knowledge of the command line and SSH.
SSH into your virtual machines.
Add Percona’s repositories.
On both nodes, execute the following command:
echo -e "deb http://repo.percona.com/apt wheezy main\ndeb-src http://repo.percona.com/apt wheezy main" >> /etc/apt/sources.list.d/percona.list && apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
Now we need to update the package sources:
apt-get update
Install Percona-XtraDB Cluster
The installation is straightforward: Continue reading “Setup Percona on Debian 7” »
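The install step described above boils down to one apt command on both nodes; the package name percona-xtradb-cluster-56 is an assumption for Debian 7 “wheezy”, so check the exact name available in the Percona repository you added:

```shell
# Install the Percona XtraDB Cluster packages (run as root on both nodes)
apt-get install -y percona-xtradb-cluster-56
```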
To add the CentOS 7 EPEL repository, open a terminal and use the following command:
yum install epel-release
If you run these commands with sudo, they are executed with root privileges, and you will be asked for your regular user’s password to verify that you have permission to run commands with root privileges. Now that the EPEL repository is installed on your server, install Nginx using the following yum command:
yum install nginx
Once it is installed, you can start Nginx on your VPS:
systemctl start nginx
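Optionally, you can also make Nginx start automatically at boot and confirm it is running; these are standard systemd commands on CentOS 7, not steps from the original excerpt:

```shell
systemctl enable nginx     # start the service automatically at boot
systemctl status nginx     # confirm the service is active
```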
You can do a spot check right away to verify that everything went as planned by visiting your server’s public IP address in your web browser (see the note under the next heading to find out what your public IP address is if you do not have this information already): Continue reading “Install LEMP with phpmyadmin on CentOS 7” »
Httperf is a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance.
NOTE: for accurate results, it’s best to run httperf from a remote machine and not from localhost.
To install httperf on Red Hat-based distributions (an additional repository is needed; for CentOS you’ll need RPMforge, see here for installation):
yum install httperf
or on Debian-based distributions:
apt-get install httperf
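Once installed, a typical invocation looks like this; the hostname, connection count, and rate below are placeholders, not values from the original post:

```shell
# 1000 connections total, opened at 100 connections/second,
# each issuing a single GET for / on port 80
httperf --server www.example.com --port 80 --uri / \
        --num-conns 1000 --rate 100 --timeout 5
```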
An example of an httperf stress test: Continue reading “stress test your web server with httperf” »
This means someone has full access to the system. Here are the tell-tale signs, in order of most likely, to give you a quick feel for what’s going on.
1. Have a look for system files that have changed recently. This is the first thing I would do.
find /etc /var -mtime -2
The “-2” means 2 days, i.e. show me all files modified in the last 2 days.
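To see exactly what -mtime -2 matches, here is a small self-contained demo against a throwaway directory (the file names are made up for illustration):

```shell
# Create one fresh file and one 10-day-old file, then run the same find test
demo=$(mktemp -d)
touch "$demo/recent.conf"                  # modified now
touch -d "10 days ago" "$demo/old.conf"    # modified 10 days ago (GNU touch)
find "$demo" -type f -mtime -2             # prints only recent.conf
rm -rf "$demo"
```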
Now, if you haven’t installed any new software on your server for a while, this command will run and produce very little output. For a server I investigated there were references to Postfix; clearly someone had installed a mail server, probably for sending spam.
2. Run who
user1 pts/2 2012-03-28 13:38 (18.104.22.168)
This should give you a list of users on the system, what you’re looking for is users other than yourself especially root. Continue reading “How to check if your server has been hacked” »
There are many ways to keep a process running on Linux, but I haven’t seen any that are as easy to implement as the script below.
Basically, the script does a ps ax and then a grep for your process. If it’s not running, the script will restart the process.
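A minimal sketch of such a watchdog; the process name “myserver” and its start path are placeholders, not from the original post:

```shell
#!/bin/sh
# keepalive.sh - restart the process if it is not in the process list.
# "myserver" and /usr/local/bin/myserver are placeholders; adapt both.
if ! ps ax | grep -v grep | grep -q "myserver"; then
    /usr/local/bin/myserver &
fi
```

A crontab entry such as `*/5 * * * * /path/to/keepalive.sh` (hypothetical path) would run the check every five minutes.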
You install the script via your crontab, i.e. crontab -e
As a bonus this mechanism will re-start your process after a re-boot. Continue reading “How to keep a job running in Linux” »
The curl syntax allows you to specify sequences and sets of URLs. Say, for example, we’re going to run a load stress test against Google; we can run:
curl -s "http://google.com?[1-1000]"
This will make 1000 calls to google i.e. Continue reading “How to quickly stress test a web server” »
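A variant of the same trick that discards the response bodies and prints one HTTP status code per request (the -w write-out string is evaluated once per transfer):

```shell
# 10 sequential requests; print only the status code of each
curl -s -o /dev/null -w "%{http_code}\n" "http://google.com?[1-10]"
```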
Lately, I was trying to migrate VestaCP-hosted sites from one server to another. This trick might help those who:
- either tried to update the IP (after trying so many Vesta forum links!) and failed, or
- have Vesta installed on a physical computer and need to move the sites out to a newer setup.
Make a user backup on the old server. In this example we will use admin as the reference user.
Copy the tarball to the new server and place it in the /home/backup directory Continue reading “Migrate hosting sites from one VestaCP to another VestaCP” »
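With VestaCP’s own CLI, the backup and restore steps above look roughly like this; the backup filename is a placeholder, so use the name that v-backup-user actually produces:

```shell
# On the old server: create the backup for user admin
v-backup-user admin

# Copy the tarball to /home/backup on the new server, then restore there:
# v-restore-user admin admin.2017-01-01.tar   # placeholder filename
```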
By default you can no longer log in via SSH as root with just a password, since it is more secure to use a pre-shared key. However, you can still enable root logins using password authentication.
To do this you need to edit the SSH config file ‘/etc/ssh/sshd_config’ as root.
# vi /etc/ssh/sshd_config
Then find the entry in the Authentication section of the file that says ‘PermitRootLogin’ and change ‘without-password’ to ‘yes’. Continue reading “Enable root logins using ssh in Debian 8.0” »
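After editing the file, the SSH daemon has to be restarted for the change to take effect; on Debian 8 with systemd that is:

```shell
systemctl restart ssh    # the service is named "ssh" on Debian
```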