Install h5ai – file indexer on CentOS 7

h5ai is a modern file indexer for HTTP web servers with a focus on your files. Directories are displayed in an appealing way, and browsing them is enhanced by different views, a breadcrumb and a tree overview. Initially h5ai was an acronym for "HTML5 Apache Index", but it now supports other web servers too.

Step 1 – Installing (Requires PHP 5.5+)

wget https://release.larsjung.de/h5ai/h5ai-0.29.2.zip
unzip h5ai-0.29.2.zip

Copy the _h5ai folder to the document root directory of the web server, so it ends up at DOC_ROOT/_h5ai:

DOC_ROOT
 ├─ _h5ai
 ├─ your files
 └─ and folders

Add the h5ai index file under /_h5ai/public/ to the web server configuration so that directories are served through h5ai.
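
For Apache httpd on CentOS 7, a minimal sketch of that configuration is below. The snippet file name is my own choice, and /_h5ai/public/index.php is the index file shipped with h5ai 0.29.x; adjust if your layout differs.

tee /etc/httpd/conf.d/h5ai.conf <<EOF
# serve h5ai's index page whenever a directory has no index of its own
DirectoryIndex index.html index.php /_h5ai/public/index.php
EOF
systemctl restart httpd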


Proxmox Cloud-Init OS template creation

Introduction

In this guide we will go over creating a Proxmox KVM template from a cloud image. The same process works for any Cloud-Init-enabled, OpenStack-style image you can find online.

Having done a number of these for our Proxmox-based VPS service, I wanted to post a guide to help anyone else looking to do the same thing.

My workflow for customizing one of those images for Proxmox, with cloud-init deployment from WHMCS and root login enabled, is below. Once you set up one template, you can rapidly deploy new instances and test things.

Setup Environment

If not already installed, you will need libguestfs-tools:

apt-get install libguestfs-tools

We will use virt-edit, which is part of libguestfs-tools, to edit the image before importing it.
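
As a small sketch of what that looks like (the image file name and the edited file are examples, not from the original post), virt-edit opens a file directly inside the disk image:

# interactively edit a file inside a downloaded cloud image
virt-edit -a debian-10-openstack-amd64.qcow2 /etc/ssh/sshd_config

# or apply a non-interactive change (the -e expression is Perl syntax)
virt-edit -a debian-10-openstack-amd64.qcow2 /etc/ssh/sshd_config -e 's/^#?PermitRootLogin.*/PermitRootLogin yes/'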


Accessing private network using pritunl VPN

This tutorial describes how to secure access to a private network using a Pritunl server. (The original post includes a network topology diagram for this setup.)

First, remove the 0.0.0.0/0 route from the server. This route tunnels all internet traffic over the VPN; for this setup, only traffic destined for the private network will be tunneled.


Install Pritunl on Ubuntu 16

Update your bare-bones, freshly installed Ubuntu 16 system:

sudo apt-get update && sudo apt-get upgrade

Add the MongoDB and Pritunl APT repositories:

echo "deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse" > /etc/apt/sources.list.d/mongodb-org-3.0.list
echo "deb http://repo.pritunl.com/stable/apt trusty main" > /etc/apt/sources.list.d/pritunl.list

Add repo keys for apt to validate against

apt-key adv --keyserver hkp://keyserver.ubuntu.com --recv 7F0CEB10
apt-key adv --keyserver hkp://keyserver.ubuntu.com --recv CF8E292A

Update the package cache

sudo apt-get update

If you have a firewall running on the server, add exceptions for Pritunl's web UI and VPN server:

sudo iptables -A INPUT -p udp -m udp --sport 9700 --dport 1025:65535 -j ACCEPT
sudo iptables -A INPUT -p tcp -m tcp --sport 9700 --dport 1025:65535 -j ACCEPT
sudo iptables -A INPUT -p <your_protocol> -m <your_protocol> --sport <your_vpn_port> --dport 1025:65535 -j ACCEPT

Note: If you've configured the firewall according to Linode's Securing Your Server guide, be sure to add these port ranges to the /etc/iptables.firewall.rules file.
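
One way to persist the rules into that file is to dump the current ruleset (a sketch, assuming the file layout from that guide):

sudo iptables-save | sudo tee /etc/iptables.firewall.rules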

Install Pritunl and its required dependencies:

sudo apt-get install python-software-properties pritunl mongodb-org

Start the Pritunl service:

sudo service pritunl start
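
As a quick sanity check (not part of the original guide), confirm the service is running and the web UI port is listening:

sudo service pritunl status
sudo ss -tlnp | grep 9700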

Open a web browser on your computer and navigate to https://123.45.67.89:9700, replacing 123.45.67.89 with your VM's IP address. You will be greeted by the Pritunl initial setup screen.



Install Discourse Forum with Nginx on Ubuntu 16.04

Step 1 – Install Docker on Ubuntu 16.04

The Discourse software is written in Ruby and JavaScript, using PostgreSQL as the main database and Redis as a cache and for transient data. We will install Discourse inside a Docker container.

The installation will be carried out on Ubuntu 16.04. To begin, install Docker using the command below:

wget -qO- https://get.docker.com/ | sh

 

After the installation is complete, check the Docker service and make sure it is running on the system.
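
A quick way to do that (a small sketch, not part of the original post):

sudo systemctl status docker
docker --version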


Linux Monitoring using Grafana, InfluxDB and Telegraf on Debian 10

The basic installation of Grafana, InfluxDB and Telegraf is described in my other post:

Install Grafana, InfluxDB, Telegraf for Jitsi Video Meet Monitoring on Debian 10

All that is needed is to create a Telegraf configuration file:

nano /etc/telegraf/telegraf.d/dashboard.conf

# Global tags can be specified here in key="value" format.
[global_tags]
# dc = "us-east-1" # will tag all metrics with dc=us-east-1
# rack = "1a"
## Environment variables can be used as tags, and throughout the config file
# user = "$USER"

# Configuration for telegraf agent
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
debug = false
quiet = false
hostname = ""
omit_hostname = false

### OUTPUT

# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
urls = ["http://your_host:8086"]
database = "telegraf_metrics"

## Retention policy to write to. Empty string writes to the default rp.
retention_policy = ""
## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
write_consistency = "any"

## Write timeout (for the InfluxDB client), formatted as a string.
## If not provided, will default to 5s. 0s means no timeout (not recommended).
timeout = "5s"
# username = "telegraf"
# password = "2bmpiIeSWd63a7ew"
## Set the user agent for HTTP POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
## Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
# udp_payload = 512

# Read metrics about cpu usage
[[inputs.cpu]]
## Whether to report per-cpu stats or not
percpu = true
## Whether to report total system cpu stats or not
totalcpu = true
## Comment this line if you want the raw CPU time metrics
fielddrop = ["time_*"]

# Read metrics about disk usage by mount point
[[inputs.disk]]
## By default, telegraf gather stats for all mountpoints.
## Setting mountpoints will restrict the stats to the specified mountpoints.
# mount_points = ["/"]

## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
## present on /run, /var/run, /dev/shm or /dev).
ignore_fs = ["tmpfs", "devtmpfs"]

# Read metrics about disk IO by device
[[inputs.diskio]]
## By default, telegraf will gather stats for all devices including
## disk partitions.
## Setting devices will restrict the stats to the specified devices.
# devices = ["sda", "sdb"]
## Uncomment the following line if you need disk serial numbers.
# skip_serial_number = false

# Get kernel statistics from /proc/stat
[[inputs.kernel]]
# no configuration

# Read metrics about memory usage
[[inputs.mem]]
# no configuration

# Get the number of processes and group them by status
[[inputs.processes]]
# no configuration

# Read metrics about swap memory usage
[[inputs.swap]]
# no configuration

# Read metrics about system load & uptime
[[inputs.system]]
# no configuration

# Read metrics about network interface usage
[[inputs.net]]
# collect data only about specific interfaces
# interfaces = ["eth0"]

[[inputs.netstat]]
# no configuration

[[inputs.interrupts]]
# no configuration

[[inputs.linux_sysctl_fs]]
# no configuration

Update the necessary parameters according to your installation.
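
After editing the file, restart Telegraf and confirm that metrics are arriving (a quick check, assuming the telegraf_metrics database name used above and the InfluxDB 1.x influx CLI):

systemctl restart telegraf
influx -execute 'SHOW MEASUREMENTS' -database telegraf_metrics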

Import the Grafana dashboard template from here:

https://grafana.com/grafana/dashboards/928


Install Netbox on Docker

The first thing to do is install Docker. Open a terminal window and issue the following commands.

Install Docker with the command: 

sudo apt-get install docker.io -y

Add your user to the docker group with the command: 

sudo usermod -aG docker $USER

Log out and log back in to the server. Install docker-compose with the command: 

sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Change the permission of the docker-compose command with the command: 

sudo chmod +x /usr/local/bin/docker-compose

Start the docker daemon with the command 

sudo systemctl start docker

Enable the docker daemon with the command 

sudo systemctl enable docker
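
As a quick sanity check (not in the original write-up), verify both tools before continuing:

docker --version
docker-compose --version
sudo systemctl status docker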

To get netbox-docker up and running, run the following commands. There is a more complete Getting Started guide on the netbox-docker wiki which explains every step.

Create a directory and clone the repository:

mkdir /var/netbox
cd /var/netbox
git clone -b release https://github.com/netbox-community/netbox-docker.git
cd netbox-docker
tee docker-compose.override.yml <<EOF
version: '3.4'
services:
  nginx:
    ports:
      - 8000:8080
EOF
docker-compose pull
docker-compose up       # run the containers in the foreground
docker-compose up -d    # run the containers in the background (detached)

The whole application will be available after a few minutes. Open the URL http://0.0.0.0:8000/ in a web browser and you should see the Netbox homepage. In the top-right corner you can log in. The default credentials are:

Username: admin
Password: admin
API Token: 0123456789abcdef0123456789abcdef01234567

How to access Netbox

It will take around two to five minutes before Netbox becomes available. To find the exact address to use, issue the command:

echo "http://$(docker-compose port nginx 8080)/"

The above command prints out the exact address and port you should use to access Netbox. In my case the following output is printed:

http://0.0.0.0:8000/

Tips:

Before you deploy the container, you’ll want to edit the .env file and configure it to meet your needs. Issue the command:

nano env/netbox.env

In that file, you might want to change the line:

SUPERUSER_PASSWORD=admin

The above is the default password for the admin user. Change that to something unique and strong. Alter any other options you might want (such as SUPERUSER_EMAIL) and save the file. 
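
If you change netbox.env after the stack is already running, recreate the containers so the new values are picked up (a sketch; run it from the netbox-docker directory):

docker-compose down
docker-compose up -d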

Src: 
https://www.techrepublic.com/article/how-to-deploy-the-netbox-network-documentationmanagement-tool-with-docker/
https://github.com/netbox-community/netbox-docker


Install Grafana, InfluxDB, Telegraf for Jitsi Video Meet Monitoring on Debian 10

Step 1: Install InfluxDB

apt update && apt install -y gnupg2 curl wget
wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/debian buster stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
apt update && apt install influxdb -y
systemctl enable --now influxdb
systemctl status influxdb
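
A quick way to confirm InfluxDB is answering is its /ping endpoint, which returns HTTP 204 when healthy (a small check, not part of the original steps):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8086/ping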

If you run a firewall (e.g. ufw) on this server, open the ports for InfluxDB and the Grafana web server:

ufw allow 8086/tcp
ufw allow 3000/tcp

Step 2: Install Grafana to display stats dashboards

curl https://packages.grafana.com/gpg.key | sudo apt-key add -
add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
apt update && apt install grafana -y
systemctl enable --now grafana-server
systemctl status grafana-server

Step 3: Install & configure telegraf

wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/debian buster stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
apt update && apt install telegraf -y
mv /etc/telegraf/telegraf.conf /etc/telegraf/telegraf.conf.original
nano /etc/telegraf/telegraf.conf

Enter the following contents in telegraf.conf:

[global_tags]

###############################################################################
# GLOBAL #
###############################################################################

[agent]
interval = "10s"
debug = false
hostname = "jitsi_host"
round_interval = true
flush_interval = "10s"
flush_jitter = "0s"
collection_jitter = "0s"
metric_batch_size = 1000
metric_buffer_limit = 10000
quiet = false
logfile = ""
omit_hostname = false

nano /etc/telegraf/telegraf.d/jitsi.conf

Enter the following contents in jitsi.conf:

###############################################################################
# INPUTS #
###############################################################################
[[inputs.http]]
name_override = "jitsi_stats"
urls = [
"http://116.203.231.172:8080/colibri/stats"
]
data_format = "json"
###############################################################################
# OUTPUTS #
###############################################################################
[[outputs.influxdb]]
urls = ["http://localhost:8086"]
database = "jitsi"
timeout = "0s"
retention_policy = ""

Enable start on boot and start Telegraf now on the "jitsi" server:

systemctl enable --now telegraf
systemctl status telegraf

(Note: we do not need to create the database manually; Telegraf will create it if it does not find one.)
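
Once Telegraf has flushed its first batch, you can confirm the database exists (a quick check using the InfluxDB 1.x CLI):

influx -execute 'SHOW DATABASES'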

(Step 4 below is also performed on the "jitsi" server.)

Step 4: Adapt the Jitsi configuration to expose stats

nano /etc/jitsi/videobridge/config

Make sure to configure the jvb options:

JVB_OPTS="--apis=rest,xmpp"

and

nano /etc/jitsi/videobridge/sip-communicator.properties

Here we configure colibri statistics:

org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc,colibri

Then restart the videobridge:

service jitsi-videobridge2 restart

Check the output in a terminal on the Jitsi server:

curl -v http://127.0.0.1:8080/colibri/stats
Response: {"inactive_endpoints":0,"inactive_conferences":0,"total_ice_succeeded_relayed":0,"total_loss_degraded_participant_seconds":0,"bit_rate_download":0,"muc_clients_connected":1,"total_participants":0,"total_packets_received":0,"rtt_aggregate":0.0,"packet_rate_upload":0,"p2p_conferences":0,"total_loss_limited_participant_seconds":0,"octo_send_bitrate":0,"total_dominant_speaker_changes":0,"receive_only_endpoints":0,"total_colibri_web_socket_messages_received":0,"octo_receive_bitrate":0,"loss_rate_upload":0.0,"version":"2.1.169-ga28eb88e","total_ice_succeeded":0,"total_colibri_web_socket_messages_sent":0,"total_bytes_sent_octo":0,"total_data_channel_messages_received":0,"loss_rate_download":0.0,"total_conference_seconds":0,"bit_rate_upload":0,"total_conferences_completed":0,"octo_conferences":0,"num_eps_no_msg_transport_after_delay":0,"endpoints_sending_video":0,"packet_rate_download":0,"muc_clients_configured":1,"conference_sizes":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"total_packets_sent_octo":0,"conferences_by_video_senders":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"videostreams":0,"jitter_aggregate":0.0,"total_ice_succeeded_tcp":0,"octo_endpoints":0,"current_timestamp":"2020-04-17 23:14:38.468","total_packets_dropped_octo":0,"conferences":0,"participants":0,"largest_conference":0,"total_packets_sent":0,"total_data_channel_messages_sent":0,"total_bytes_received_octo":0,"octo_send_packet_rate":0,"conferences_by_audio_senders":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"total_conferences_created":0,"total_ice_failed":0,"threads":37,"videochannels":0,"total_packets_received_octo":0,"graceful_shutdown":false,"octo_receive_packet_rate":0,"total_bytes_received":0,"rtp_loss":0.0,"total_loss_controlled_participant_seconds":0,"total_partially_failed_conferences":0,"endpoints_sending_audio":0,"total_bytes_sent":0,"mucs_configured":1,"total_failed_conferences":0,"mucs_joined":1}

This response shows that the REST API works and can be used for our purpose. To make sure the colibri REST endpoint can be reached by Telegraf, open port 8080:

ufw allow 8080/tcp

Configure dashboards in Grafana

Open Grafana in a browser at http://<Your-IP>:3000 and set up your admin account.

Add a datasource.

We will add an InfluxDB datasource and set it as the default:

Name: InfluxDB
Default: On
HTTP URL: http://localhost:8086
HTTP Access: Server (default)
Database: jitsi
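
Alternatively, the same datasource can be defined from the shell via Grafana's provisioning directory (a sketch; the file name is my own choice, and Grafana must be restarted afterwards):

tee /etc/grafana/provisioning/datasources/influxdb-jitsi.yaml <<EOF
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://localhost:8086
    database: jitsi
    isDefault: true
EOF
systemctl restart grafana-server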

Download a Dashboard.

https://grafana.com/grafana/dashboards/11969


Src: https://community.jitsi.org/t/how-to-to-setup-grafana-dashboards-to-monitor-jitsi-my-comprehensive-tutorial-for-the-beginner/38696


Install Rclone for syncing server contents to cloud storage – Google Drive, OneDrive, Dropbox or ownCloud/Nextcloud

Use case: pushing Jibri-recorded content to cloud storage providers.

Rclone installation (Debian 10)

All commands below are executed as 'root'. (I know!)

apt update
apt install curl -y
curl https://rclone.org/install.sh | bash

Rclone is now installed. We need to find where rclone expects its config file:

rclone config file

Response:

Configuration file doesn’t exist, but rclone will use this path:
/root/.config/rclone/rclone.conf

So we need to upload the file from our Windows PC/laptop (C:\Users\[user]\.config\rclone\rclone.conf) to that location on the Jibri server (/root/.config/rclone/rclone.conf). (I used WinSCP for this.) After the upload, run rclone config file again to be sure rclone finds its config.
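
With the remote configured, pushing Jibri recordings might look like the sketch below (the remote name gdrive and the local path /srv/recordings are assumptions; adjust to your rclone.conf and recording directory):

# list configured remotes to confirm the uploaded config is picked up
rclone listremotes

# copy new recordings to the cloud remote (copy never deletes on the destination)
rclone copy /srv/recordings gdrive:jibri-recordings --verbose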


How to increase memory size for MySQL Server

To increase the memory size for a MySQL Server, follow these steps:

1. Enter management mode by typing your password and pressing Enter twice. Select "Exit to terminal" using the arrow keys and then press Enter.

2. Type:

nano /etc/my.cnf

3. Locate the line innodb_buffer_pool_size = 1024M and change the value to roughly 50% of the VM's RAM (1024M means 1024 megabytes); see the example after this list.

4. Press Ctrl+X to exit the text editor, then press Y to save.

5. Reboot the appliance using the "Restart system" option in management mode.
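
As an illustrative example (the values are assumptions: a VM with 4 GB of RAM would get roughly a 2 GB buffer pool), the edited line and a quick post-reboot verification might look like this:

# /etc/my.cnf – set the InnoDB buffer pool to ~50% of RAM (example: 4 GB VM)
innodb_buffer_pool_size = 2048M

# after the reboot, confirm the value MySQL actually loaded (in megabytes);
# assumes root can log in via the local socket
mysql -e "SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;"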
