Migrate WordPress to Media Temple Plesk Server – Part 1

Hey All,


I thought that I would put this together to show the steps to migrate a WordPress site over to a Media Temple DV server with Plesk Onyx. The following procedures will work with other hosting providers as well. There are some things to consider before moving any web site to another server or location. You need to look at the following:


  • Operating System type – CentOS, Debian, Ubuntu, RedHat Enterprise
    • Each operating system handles things very similarly but also a little differently. It is a matter of opinion, I guess, as to which operating system to use for your server. Most have gone with a CentOS / RedHat based system as it is built for enterprise servers. An example would be that CentOS / RedHat Enterprise use httpd (Apache) or Nginx as the web server, while Debian / Ubuntu use apache2 as their web hosting platform.
    • There are folder structure differences as well which are dictated not only by the operating system but also by the application being used.
    • The content will work under any operating system. You just need to make sure that the content is within the correct folder location. The following folders are default for certain platforms and can be changed to other locations.
      • Apache – /var/www/html
      • Plesk w/ Apache – /var/www/vhosts
      • cPanel w/ Apache – /home/username/public_html
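As a quick sanity check, you can probe which of the common docroot parents actually exist on the target box. A minimal shell sketch (the paths are the assumed defaults from the list above):

```shell
# Check which of the common document-root locations exist on this server.
for d in /var/www/html /var/www/vhosts /home; do
  if [ -d "$d" ]; then
    echo "found: $d"
  fi
done
```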


  • DNS Changes – The DNS zone file will need to be modified in order to have the web traffic point to the new server location. Keep in mind that most DNS changes should be done last, giving you time to make sure that the correct content is in place and ready to go.
    • At the registrar, i.e. Namecheap, GoDaddy, etc., add or change the name servers to ns1.mediatemple.net and ns2.mediatemple.net. In Namecheap I had to add new name server entries in order to point back to Media Temple. It took about an hour for propagation to complete.
    • Make sure that the zone file has been created within the Media Temple Account Center of your account. If this has not been completed, DNS will not propagate.


  • Content Backups – First and most important, make sure that you have backups of your WordPress site and databases. Backup plugins such as WPBackItUp will work for this process and back up content, plugins, and themes as well as your database content.



  • Which Migration Tool To Use – This is totally up to you and your experience level. Some of the plugins that WordPress has available tend to get into the weeds while others are much simpler. The one that I found which worked really well is called WP Clone. You install this plugin on the server that you are backing up and click on the Create Backup button. It pulls down a copy of the WP content, themes, plugins and database to be transferred. You will be presented with a popup which includes the URL where the backup is located. Install the same plugin on the new server and add the URL to the Restore from URL box.







Steps Needed to Move Content


Create the Domain in Plesk Onyx:


  • Log into your Plesk Onyx Panel
  • Select Domains on the left hand side of the panel under Hosting Services



  • Add a new domain to the Plesk Panel by clicking on the Add Domain Button



  • You will be presented with the following panel to add your new domain account



  • Add the domain name
  • Choose the subscription or create a new one
  • Give a username for the new domain account
  • Assign a secure password for the new username
  • Click the Ok button


Adding a Subscription to the Plesk Panel:


If you need to add a subscription, you can do the following. Keep in mind that it is the same process as adding a domain to the Plesk Panel.


  • Log into your Plesk Onyx Panel
  • Select Subscriptions on the left hand side of the panel under Hosting Services


  • Add a new subscription to the Plesk Panel by clicking on the Add Subscription Button



  • You will be presented with the following panel to add your new subscription account



  • Add the domain name
  • Choose the subscription or create a new one
  • Give a username for the new domain account
  • Assign a secure password for the new username
  • Choose a service plan to use if you do not want to use the default
  • Click the Ok button
  • In the subscriptions panel, you will see the domain name that you created. If you click on that domain name you will be sent to a control panel to work with the different aspects of the domain. You can access this same panel by clicking on Domains to the left side of the screen and clicking on the domain name in your list.




Create Backup and Migrate Data:


  • Create a backup of your existing WP instance which should include WP, themes, plugins and the database.
  • Install WP Clone Plugin on the older WP server.
  • Issue a backup as shown above with the WP Clone Plugin.



  • You will be presented with a backup URL which will be used during the migration process.



  • Install WP within your Plesk Onyx Panel.
  • Install WP Clone Plugin on the newer WP server.
  • From within your new WP instance, select the WP Clone Plugin and paste in the URL that was shown above during the backup process






Fix WordPress Admin Credentials:

One issue that you will run into is that you may not be able to connect to the WordPress Admin panel due to a bad admin password. Below are the steps that I took to change the admin password so that I could log into my site again.


  • After the migration has completed, you will want to open phpMyAdmin within the Plesk Panel.
  • In the Plesk Panel, when you select the domain name that you are working with, look at the right hand side of the screen and you will see Databases as shown below.


  • Once you click on Databases, you will enter the database panel
  • Select the phpMyAdmin button as shown below



  • Once you have clicked on phpMyAdmin, choose the wp_users table as shown below. This is where you will update the admin account password.



  • Now you will be presented with the table that shows the admin user account. Click on the edit button to make the changes you need. The password that you see is hashed, but don’t fear, I will show you what to do.



  • There are a few things to look at here
    • Make sure that the user name, display name and user nicename are all the same
    • Click on the Function box in the password field and select MD5
    • Add your password in the Password Value field
    • Click Go when ready



  • Now that you have the password changed, you are ready to try your login. Go to your domain.com/wp-admin and you will be presented with a login screen.
  • Type in your username, usually admin
  • Type in your password
  • You should be able to log in just fine.
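For reference, the MD5 function in phpMyAdmin stores the plain MD5 hex digest of whatever you type into the value field, so you can preview what will end up in the user_pass column from a shell (a sketch; the password here is just a placeholder):

```shell
# Compute the same MD5 hex digest that phpMyAdmin's MD5 function stores.
# 'NewPassword123' is a placeholder -- use your own password.
printf '%s' 'NewPassword123' | md5sum | cut -d' ' -f1
```

WordPress upgrades the stored MD5 hash to its stronger portable hash format on the next successful login, so the MD5 value is only there to get you in the door.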

WordPress Security

Since the number of WordPress sites has grown tremendously, there are things which need to be done to make sure that the blog site and the data are not compromised and, if they are, how to fix the problem. The following comes from information that I have put together while working WordPress compromises over the past few years, and I hope that it helps everyone stop issues before they arise.


Keep in mind that some of the steps below can be used within other Content Management Systems (CMS) such as Joomla and Drupal as well.


Table of Contents


  • Basic WordPress Security
  • WordPress Permissions
    • File Permissions
    • Folder Permissions
  • WordPress Brute Force Attacks
    • WordPress wp-login block using Fail2ban
    • Restrict Access To WordPress Admin Panel




Basic WordPress Security


I put together a presentation on this subject which can be found if you click on – WordPress Security Presentation


There are things which can cause a content management system such as WordPress to become unstable or even compromised. One of the items which should be looked at is the release information of the WordPress installation. If the customer is unsure of the version or versions that they are running, the following will help find that out. Latest version as of this writing is 4.2.2. 

  • Run the following from within the web site document root folder to find the installed version – locate wp-includes/version.php | xargs -l1 grep -H "wp_version ="
  • Ensure that all plugins and themes are up to date. This is something that the customer will need to do from within the WordPress Admin panel. WordPress is good at telling the administrator what needs to be updated.
  • Do not use plugins from a place not associated with the WordPress site itself. It is better to have the plugins verified by WordPress than not.
  • A customer should be interested in locking down their WordPress instance and getting more out of security, so below is a list of plugins that will help with this.
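The version check above can be sketched end to end like this. A stand-in version.php is created here so the example is self-contained; on a real server you would run only the grep from within the document root:

```shell
# Self-contained sketch: create a stand-in version.php, then read the
# version string the way you would on a live WordPress install.
mkdir -p wp-includes
echo '$wp_version = "4.2.2";' > wp-includes/version.php

grep -H "wp_version =" wp-includes/version.php
# -> wp-includes/version.php:$wp_version = "4.2.2";
```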



Akismet – “Akismet is quite possibly the best way in the world to protect your blog from comment and trackback spam.”

Block Bad Queries – “Protect WordPress Against Malicious URL Requests”

Health Check – “Checks the health of your WordPress install”

Spam Free WordPress – “Comment spam blocking plugin that uses anonymous password authentication to achieve 100% automated spam blocking with zero false positives”

Ultimate Security Checker – “Security plugin which performs all set of security checks on your WordPress installation”

WordPress File Monitor Plus – “Monitor your website for added/changed/deleted files”

WordPress Firewall 2 – “This WordPress plugin monitors web requests to identify and stop the most obvious attacks”




WordPress Permissions


One of the major issues that I have come across which contributes to most WordPress (and, soon after, server) compromises is file and folder permissions. If a developer is doing the initial installation, I have found that they will open up the permissions to 777 (-rwx, -rwx, -rwx) or maybe a little lower in order to get the work done, but forget to back them off to a more manageable level. In order to manage this better we need something like FastCGI or PHP-FPM installed on the server. By having one of these in place, the apache user should not be needed to make these sites work.



It is recommended that folder permissions be set to no more than 755 (-rwx,-rx,-rx), with the exception of folders inside of the wp-content folder such as uploads, themes, etc. Any folder inside should still be 755 (-rwx,-rx,-rx) but can be safely taken up to 775 (-rwx,-rwx,-rx) as long as the owner and group are not directly related to apache.

File permissions should be no more than 644 (-rw,-r,-r) throughout the WordPress instance. There are some exceptions to this, such as the .htaccess file and wp-config.php file.

  • Create a phpinfo.php page with the following

<?php
phpinfo();
?>



  • Check to make sure that FastCGI is installed and running
  • Look for the Server API value, which should be set to CGI/FastCGI


Once you verify that FastCGI is installed, it is time to make sure that the file and folder permissions are set correctly




Make sure that you get permission from the customer to make the following changes to their server, as there is the potential for adverse effects.


File Permissions


According to WordPress security guidance, file permissions should be no more than 644 (-rw,-r,-r). Below shows how to issue a mass file permissions change, as long as apache is not an owner or group of the content.

  • Change directory to the web site document root where the WordPress installation exists.
  • Before making the following change, issue: for i in `find * -type f`; do ls -alh $i; done >> fileperms
  • The current file permissions are held in the file called fileperms so that if something happens we can do a little magic and get the permissions set back.
  • Issue the following to do a mass file permission change: find * -type f -exec chmod -R 644 {} \;
  • Once the change has taken effect, verify by issuing: for i in `find * -type f`; do ls -alh $i; done
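The "little magic" to set permissions back can also be done with stat instead of parsing ls output (my own variation, not from the original notes). A self-contained sketch using a throwaway directory:

```shell
# Record file modes, mass-change them, then restore from the saved list.
mkdir -p demo && touch demo/a.php demo/b.php
chmod 600 demo/a.php

# Save "mode path" pairs before touching anything
find demo -type f -exec stat -c '%a %n' {} \; > fileperms.saved

# Mass permission change, as in the steps above
find demo -type f -exec chmod 644 {} \;

# Restore the original modes from the saved list
while read -r mode path; do
  chmod "$mode" "$path"
done < fileperms.saved

stat -c '%a' demo/a.php   # -> 600
```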

Folder Permissions


According to WordPress security guidance, folder permissions should be no more than 755 (-rwx,-rx,-rx). As mentioned before, though, there are some folders which need more permissions than others. The procedures for WordPress folder permissions are very similar to those in the file permissions section above.

  • Change directory to the web site document root where the WordPress installation exists.
  • Before making the following change, issue: for i in `find * -type d`; do ls -alh $i; done >> folderperms
  • The current folder permissions are held in the file called folderperms so that if something happens we can do a little magic and get the permissions set back.
  • Issue the following to do a mass folder permission change: find * -type d -exec chmod -R 755 {} \;
  • Once the change has taken effect, verify by issuing: for i in `find * -type d`; do ls -alh $i; done

The difference here is that it is better to adjust the other folders now rather than later. What I have done in the past is the following, making sure to not go over 775 (-rwx,-rwx,-rx).

  • Change directory to the wp-content folder within the WordPress installation.
  • Before making the following change, issue: for i in `find * -type d`; do ls -alh $i; done >> wpcontentperms
  • The current folder permissions are held in the file called wpcontentperms so that if something happens we can do a little magic and get the permissions set back.
  • Issue the following to do a mass folder permission change: find * -type d -exec chmod -R 775 {} \;
  • Once the change has taken effect, verify by issuing: for i in `find * -type d`; do ls -alh $i; done

I have a note that shows another way of handling permissions in a Plesk environment. Making this work in Plesk is very easy, but there are some gotchas which can have adverse effects if you are not careful. Adding apache to the psacln group within Plesk is a bad idea as it has its own security issues that come along with it. So it was suggested that you may want to change the default umask of Apache to 000 so all files it writes are written with 777 permissions, then change the wp-content directory and all directories below it to 777 permissions.

  • To adjust the wp-content folder to 777 – find wp-content -type d -exec chmod 777 {} \; 

With this done, the FTP user of the site will be able to modify files created by Apache, BUT Apache will NOT be able to modify files created or modified by the FTP user. An attacker will only be able to write files in the wp-content directory, but as I have mentioned before, I do not agree with having folders set to a world-accessible status of 777 (-rwx, -rwx, -rwx). Instead, it is best to at least have things locked down to a more stable permission set of 775 (-rwx, -rwx, -rx) and no higher.

  • To change the wp-content folder to 775 – find wp-content -type d -exec chmod 775 {} \;




WordPress Brute Force Attacks


What you may have noticed is that the apache access logs get filled with XMLRPC and wp-login login attempts from different parts of the world. This is commonplace anymore while using WordPress as a CMS. There are some things that can be done in order to make sure that the blog’s integrity is still in one piece while mitigating the attacks against the site.


Most, if not all, brute force attacks are automated in nature, which means that they come either from compromised machines on the internet or from scripts that have been kicked off by hackers knowing that a customer is using WordPress as their software of choice.




Make sure that you get permission from the customer to make the following changes to their server, as there is the potential for adverse effects.


WordPress XMLRPC Access Blocking


The WordPress xmlrpc.php file is used for API access to the administrative panel as well as access to the site via mobile devices. Lately, this has become a major concern when it comes to WordPress security. The following sends out a global 403 Forbidden for any xmlrpc.php access attempt. Keep in mind that this can be added on a per-domain basis as well, but if the customer is not using API and/or mobile access, then global blocking is better. This change will take some time to fully work for existing traffic, while new traffic will automatically see the 403.

  • Edit the apache configuration file located in /etc/httpd/conf/httpd.conf
  • Add the following anywhere in the configuration file. I will usually place it right above the virtual host entries

<IfModule mod_alias.c>

RedirectMatch 403 xmlrpc\.php

</IfModule>


  • Save the apache configuration file
  • Restart the apache service
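As a quick way to see which request paths the RedirectMatch pattern would catch, you can run the same regex through grep (grep stands in for Apache's regex matching here, and the sample paths are made up):

```shell
# Paths containing xmlrpc.php anywhere will match and be sent a 403.
printf '%s\n' /xmlrpc.php /blog/xmlrpc.php /index.php | grep 'xmlrpc\.php'
# -> /xmlrpc.php
# -> /blog/xmlrpc.php
```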


WordPress wp-login block using Fail2ban


One of the biggest issues so far while looking at web site access logs is noticing wp-login brute force attempts that occur from locations all over the world. In order to help mitigate this issue, there are some steps which need to be taken using applications such as Fail2ban and iptables. These steps can be used on pretty much any Linux platform.

  • Make sure that you have Fail2ban installed and working
  • Edit the jail file located in /etc/fail2ban/jail.conf and add the following content. Set enabled = true when you are ready to turn the jail on.

    [wordpress-login]
    enabled = false
    filter = wordpress-login
    action = iptables[name=WordPressLogin, port=http, protocol=tcp]
    logpath = /var/www/vhosts/*/statistics/logs/access_log
    maxretry = 5
    bantime = 86400

  • Create a new filter for the WordPress jail instance that you created above. The new file should be located in /etc/fail2ban/filter.d/wordpress-login.conf

    # wordpress-login.conf
    [INCLUDES]
    before = common.conf

    [Definition]
    _daemon = wordpress
    failregex = ^<HOST>\ \-.*"POST\ \/wp-login.php HTTP\/1\..*"
    ignoreregex =

  • Save the changes to the jail.conf file
  • Save the changes to the new WordPress filter file
  • Restart the Fail2ban service with /etc/init.d/fail2ban restart
  • Check the iptables firewall to make sure that the wordpress-login jail shows in the list: iptables -L
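Outside of Fail2ban you can sanity-check that regex shape against a sample access-log line. Here grep -E stands in for Fail2ban's matcher and an IP pattern stands in for the <HOST> tag (both are my assumptions, and the log line is made up):

```shell
# A typical wp-login POST line as it appears in an Apache access log.
line='203.0.113.9 - - [01/Jan/2020:00:00:00 +0000] "POST /wp-login.php HTTP/1.1" 200 1234'

# <HOST> replaced by an IP pattern for testing purposes.
echo "$line" | grep -E '^[0-9.]+ -.*"POST /wp-login\.php HTTP/1\.' && echo MATCH
```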


Restrict Access To WordPress Admin Panel


In order to make sure that the customer has the access that they need to work on their WordPress instance even with the Fail2ban jail in place, we can restrict access to the WordPress login by adding the following to the apache configuration file.


  • Edit /etc/httpd/conf/httpd.conf and add the following
  • Change the x.x.x.x entries to IP addresses given by the customer


<Location /wp-login.php>

order deny,allow

deny from all

# whitelist addresses

allow from x.x.x.x

allow from x.x.x.x

allow from x.x.x.x

</Location>


  • Save the /etc/httpd/conf/httpd.conf file
  • Restart the apache service
  • Have the customer test that they can reach and log into their WordPress instance





Below is a link to a presentation that I put together for WordPress Security back when I worked at Rackspace Hosting.













WordPress Migrations

Hey guys,

I wanted to take this time to map out some information on migrating staging WordPress sites over to production. These steps are important because as you start building changes to web sites in a staging environment, if the proper changes have not been made you could inadvertently overwrite your production web site.




Step 1: Make sure that you have backups of the production and staging WordPress sites located somewhere off site. You can have these on your local desktop or laptop for now, but I would recommend a NAS or some other type of filer for storage.


  • There are backup plugins that get installed from the WordPress admin panel. The one that I use and have found to work really well is called WPBackItUp. The plugin will create backups of your web site, the themes, the plugins, and anything else that is needed to make the site run. The backups will always be kept on the hosting server that the web site is currently installed on. From that server, you can download the backup files and keep a copy local.


  • UpDraftPlus is another great application for backing up the web site and everything needed to make it run. This plugin will allow your backups to be stored off site with most popular file storage providers. You just need to make sure that you have an active account with a storage provider such as Google Drive, Microsoft OneDrive, Rackspace Cloud, etc. I have this one configured and it even sends me an email once backups have completed and have been transferred. UpDraftPlus pulls a copy of the following:





Step 2: One thing to consider here is whether you will be using the same database for the production site that you used for staging. The reason to consider this is that you will need to modify the wp-config.php file with the new database information if you are changing databases. If you plan to use the staging database as production, then you can leave the database settings alone. Below is an example of what the settings look like before modifying for your database connection.

define('DB_NAME', 'database_name_here');
define('DB_USER', 'username_here');
define('DB_PASSWORD', 'password_here');
define('DB_HOST', 'localhost');


Step 3: You need to copy the content from one location to another. Whatever FTP client you are using, make sure that you add the content to the correct /html folder on the server. You need to make sure that you have all of the files, including the hidden files (beginning with a ".", i.e. .htaccess), ready to transfer. The hidden files are important to making WP sites work.

You can download a copy of the staging site to your local machine and re-upload it to the new folder location, or if on the same server, just copy it from one location to another. If the sites are on the same server, you can use SSH to make the copy by doing the following:

rsync -avP /domain01/html/ /domain02/html/




cp -a /domain01/html/. /domain02/html/


The rsync command is great for copying content because if something happens to the connection, it normally knows where it left off and can continue with the copy. The cp command is the basic Linux copy command to send files and folders to other locations on the server.
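A throwaway sketch of the copy step using local placeholder directories (domain01/domain02 here are stand-ins, not real paths), to show that hidden files come along:

```shell
# Stand-in for the staging and production docroots.
mkdir -p domain01/html domain02/html
echo 'hello' > domain01/html/index.php
touch domain01/html/.htaccess        # hidden file -- must be copied too

# The trailing /. makes cp include dotfiles from inside html/.
cp -a domain01/html/. domain02/html/

ls -A domain02/html                   # -> .htaccess and index.php
```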




Step 4: Make sure that DNS is set correctly. I would set this with a TTL of 300, or as low as it can go, in order to make sure that DNS propagation takes place quickly. It is still common practice to give more time for propagation to occur (24 hours). The record that needs to be changed, if the site is on a different server, will be the A record entry of the domain zone file. If the site is located at a different host, the A record will need to be changed along with the name server (NS) records.

Manjaro Cinnamon DE and Intel Video

Hey guys,

I decided to post a question on the Manjaro forums about the issue that I was seeing between Manjaro Cinnamon and the Intel video drivers installed in my laptop. It was mentioned by a user known as muvvenby to uninstall the xf86-video-intel driver using the mhwd command and see how that works. Well, the following commands allowed me to find the driver used by my card and remove it. So far so good, as I am able to use simplescreenrecorder to record my laptop screen without any major issues.


mhwd -li
mhwd -li -d
mhwd -li -d --pci
sudo mhwd -r pci video-intel


  • Remove the intel drivers by using the above commands.
  • It may not be necessary but go ahead and reboot your machine.
  • Install compton in Cinnamon to help with any screen tears if they are showing themselves


You will notice some difference in performance with the compositor, but the differences should not be that drastic. I am finally able to use Cinnamon on my laptop with barely any issues at all at this time. If I see anything drastic, I will post it here.


Below is an example of a recording that I did yesterday after the changes with the Intel video drivers. I am using simplescreenrecorder to test this out, so we know that this works with the Intel video drivers, and it works well. The only issue is some missing cosmetic stuff; there are no actual application issues that I can see so far. Now, the only OS that I have tested this in is Manjaro. I am assuming that the same capabilities will exist in other Linux operating systems as well.





There is a user known as jsbach on the Manjaro forum that passed along the following information on this issue as well.


“I removed the Intel driver. I was experiencing screen tearing and other problems. With the modesetting driver everything works perfectly for me (on the three different notebooks). How to do it:”

1) Check:

mhwd -l -d

2) Do

sudo mhwd -r pci video-intel

3) Create /etc/X11/mhwd.d/intel.conf with the following content:

Section "Device"
        Identifier  "Intel Graphics"
        Driver      "modesetting"
EndSection

4) Reboot.





[kf4bzt@tim-laptop ~]$ mhwd -l -d
> PCI Device: /devices/pci0000:00/0000:00:02.0 (0300:8086:0f31)
Display controller Intel Corporation Atom Processor Z36xxx/Z37xxx Series Graphics & Display

NAME: video-intel
VERSION: 2017.03.12
INFO: X.org intel video driver. Standard open source driver for intel graphic cards.
CONFLICTS: video-hybrid-intel-nvidia-bumblebee video-hybrid-intel-nouveau-bumblebee


NAME: video-intel
VERSION: 2017.03.12
INFO: X.org intel video driver. Standard open source driver for intel graphic cards.
CONFLICTS: video-hybrid-intel-nvidia-bumblebee video-hybrid-intel-nouveau-bumblebee

NAME: video-vesa
VERSION: 2017.03.12
INFO: X.org vesa video driver.





[kf4bzt@tim-laptop ~]$ sudo mhwd -r pci video-intel

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

[sudo] password for kf4bzt:
> Removing video-intel…
Using default
Has lib32 support: true
Sourcing /var/lib/mhwd/local/pci/video-intel/MHWDCONFIG
Processing classid: 0300
Sourcing /var/lib/mhwd/scripts/include/0300
checking dependencies…

Packages (2) libxvmc-1.0.10-1 xf86-video-intel-1:2.99.917+772+gc72bb27a-1

Total Removed Size: 2.29 MiB

:: Do you want to remove these packages? [Y/n]
:: Processing package changes…
removing xf86-video-intel…
removing libxvmc…
:: Running post-transaction hooks…
(1/1) Arming ConditionNeedsUpdate…
‘/etc/X11/xorg.conf.d/90-mhwd.conf’ symlink is invalid! Removing it…
> Successfully removed video-intel







OBRevenge – An Awesome Arch OS

Hey Guys,

I am trying not to be a distro hopper, but there are so many different distros out there that it is hard to choose. Everyone has their own way of handling common tasks, down to the way the underlying system works in general. I have become more and more of an Arch user as the apps that I want to use are readily available, whereas distros such as Ubuntu and Mint, even though they are nice operating systems, can make it hard to find what I want.


I wanted to do this post as I have started using an awesome OS on my desktop called OBRevenge. This Arch-based OS is created around the Openbox desktop environment, which appears to have a lot of capabilities built in. Openbox, like Mate and XFCE, is lightweight but still carries a lot of punch. So far, I am highly impressed with how well it works on my system. I am a big Mate fan, especially between versions 16 and 17, but this is nice for something different to play with.


I took a few screenshots to show some of the main points of the OBRevenge system. All in all it is like most Arch releases with some nice addons. The first screenshot shows what the main screen looks like right now. There are several wallpapers to choose from using an app called nitrogen, but I like the transitions in the one that I have chosen. I have enabled the Mate desktop style as this is what I am used to seeing. You can choose from OBR styles such as Tint2, LXPanel, XFCE4 and Mate. If the dock is not showing, you can add it with the click of a button and use preconfigured layouts.





The docky panel is pre-installed which I think is a great idea. I use a dock all the time to bring my most used apps to the desktop and docky just works and appears to be less resource intensive than some others that I have tried. The developers also integrated an awesome conky display with some shortcut keys to help with some simple everyday items.



The main desktop view:






Nitrogen Wallpaper selector:










OBRevenge has a nice OS Control Panel with some options that will help everyone. The first screenshot is for configuring the panel, with options like a Panel Switcher which will change the panel look to match something that you are used to using, as well as changing wallpapers, etc.





The second tab is for more system related settings such as display, networks and power settings.





The third tab is for software related items. Here you can manage the Mirrorlists, install software of your choice as well as download OBRevenge Wallpapers and work with Software Updates.





The last tab is used for installing things such as Flash, codecs, nvidia drivers and VirtualBox drivers. You can also create a live USB device from an ISO.





If you click on the System Info button at the bottom, you will be presented with the following terminal screen which will give you information about your system.





And last but not least, if you click on System Monitor, you will be presented with a nice layout of top. This has quite a bit of information to help you troubleshoot potential issues.








The overall performance is fast and efficient, and it works well with my Acer laptop. That says a lot. There are some things to get used to though. I am not sure if the new thumb drive that I got was having issues initially or just needed to be formatted, but it could not be seen at all. My older thumb drive was working just fine as it was already formatted. I loaded a live media, formatted the new thumb drive, and am able to see it now, but there are still some weird things happening which are unrelated.


There is quite a bit of room to grow within this operating system, and with it being based primarily on Openbox, it is lighter than most. I really like XFCE and Mate, and this fits right in. KDE and Gnome appear to be too heavy on resources. Even though my laptop has 8 GB of RAM and a quad core CPU, I still feel some pain with KDE and Gnome. I try them from time to time to see what has changed and to keep up with the latest desktop environments.


As you can see from the small video clip that I created below, there is a lot of capability that comes with OBRevenge. I like that when I right click, a new set of menus pops up with all of the applications.






One thing of interest that I found, once I get used to using it, is a search bar called Albert. Albert is a keyboard launcher very similar to Alfred on macOS. You can set up a key sequence such as Ctrl+Space to bring up the search bar. From the search bar, you have access to the applications installed on the desktop as well as search engine results. In the Plugins tab below, you can see the available options.


Albert General Tab:




Albert Plugins Tab:






Network Tools

There are several tools within Linux for working with network settings and finding information about the network you are on. One thing you will notice is that I have hidden the MAC addresses of my equipment for this tutorial. The reason is that the MAC address is considered the physical address of your network interface; it was described to me as being similar to your home address.


Disclaimer: These commands should not be used for malicious activity. I do not condone, and am not responsible for, any malicious act committed using any command shown.


  • ifconfig -a – In the example below, the ether field shows the MAC address assigned to your network interface, which is unique to each card. The inet field is the IPv4 address given to your network interface. The inet6 field is the IPv6 address; IPv6 is not yet used by a lot of internet service providers.
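As a quick sketch of reading those fields, the block below pulls the ether and inet values out of a sample ifconfig block. The interface name, MAC, and IP here are made-up placeholder values (documentation-range address), not real output:

```shell
# Parse the ether (MAC) and inet (IPv4) fields from sample ifconfig output.
# wlp2s0, 00:11:22:33:44:55, and are placeholders, not real values.
sample='wlp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        ether 00:11:22:33:44:55  txqueuelen 1000  (Ethernet)'
echo "$sample" | awk '/ether/ {print "MAC:  " $2} /inet / {print "IPv4: " $2}'
```

Against a live system you would pipe `ifconfig wlp2s0` into the same awk instead of the sample text.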





  • iwconfig – The iwconfig command gives information about the wifi network you are connected to. The Access Point field that I marked out is the MAC address of that access point.



  • sudo ifconfig wlp2s0 promisc – Places a wireless interface in promiscuous mode for monitoring your local wifi network. Keep in mind that you need sudo here, as you are making changes to the network interface.


  • sudo ifconfig wlp2s0 -promisc – Takes the interface out of promiscuous mode and back to normal wifi operations.
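One way to confirm the change took effect is to look for the PROMISC flag in the interface flags. A small sketch, using a sample flags line in place of live `ifconfig wlp2s0` output:

```shell
# Check for the PROMISC flag; this sample line stands in for real ifconfig output.
line='wlp2s0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500'
case "$line" in
  *PROMISC*) echo "promiscuous mode is ON" ;;
  *)         echo "promiscuous mode is OFF" ;;
esac
```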


Before the change to promiscuous mode:





After the change to promiscuous mode:








  • route command – The route command in Linux shows the kernel routing table. Under Flags, U means the route is up, while G means it goes through a gateway; UG together indicates an up route that uses a gateway.



  • route -n – The -n switch shows IP addresses in the routing table instead of resolving them to host names.



  • route add -net default gw gatewayname dev wlp2s0 – Adds a default route through the specified gateway on the wlp2s0 interface (replace gatewayname with your gateway's name or IP).


  • route -Cn – Shows the kernel's route cache, used for faster routing of network traffic. There may not be any cache entries, so don't be concerned if you don't see anything here.




One thing that becomes an issue is when someone tries to brute force your machine or network. Most companies have ways to deter this, but what if you are a home user and don't have the fancy network firewalls and IDS systems? The following will help take care of the problem.
These notes are something I used from time to time while working in the Linux hosting industry, and they work well. If there is a problem IP address, just null-route it using the route command. Let's say an IP address (IP-ADDRESS below) is causing problems; type the following at your command line:
  • route add IP-ADDRESS gw lo
You can verify it with either of the following commands:
  • netstat -nr OR route -n
You can also reject the target:
  • route add -host IP-ADDRESS reject
To confirm the null-routing status, use the ip command as follows:
  • ip route get IP-ADDRESS
Output: RTNETLINK answers: Network is unreachable
Drop an entire subnet:
  • route add -net NETWORK/PREFIX gw lo
You can also use the ip command to null-route a network or IP; enter:
  • ip route add blackhole IP-ADDRESS
  • route -n
If you would like to remove a null route or a blocked IP address, just enter the following:
  • route delete IP-ADDRESS
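Putting those steps together, a small helper can be sketched that emits the add/verify/remove commands for a given address. It is a dry run that only prints the commands (route changes need root), and 192.0.2.1 is a documentation-range IP, not a real offender:

```shell
# Dry-run helper: print the null-route lifecycle for one IP address.
nullroute_plan() {
  ip="$1"
  echo "route add -host $ip reject"   # block the address
  echo "route -n | grep $ip"          # verify the entry exists
  echo "route delete $ip"             # remove the block later
}
nullroute_plan 192.0.2.1
```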






DNS Explained – Part 2 (Tools)

In Linux, there are some tools we use to check which DNS settings a domain name is using. Most Linux servers, including RedHat / CentOS / Debian, ship with a built-in DNS service called named. Control panels such as Plesk and cPanel use named to host their DNS settings locally.

Commands Used for DNS Queries:

  • nslookup command – Name Server Lookup tool for finding the name servers that hold the zone file for the domain you are looking up.




  • dig command – Using dig with just a domain name brings back the IP address where the domain lives.





  • whois command – Looks up registration information about the domain, as recorded through ICANN.





  • host command – The host command is used to do DNS lookups and will convert a domain name to an IP address.









Files used in DNS related queries:


  • /etc/resolv.conf – holds the name servers the server uses for DNS resolution





  • /etc/hosts – holds local host entries mapping domain names to IP addresses; it is checked before DNS








Search for a domain's mail exchanger (MX) record:
  • nslookup -type=mx domain.com





  • dig mx google.com 





Search for a domain's A record:
  • nslookup -type=a domain.com





  • dig a domain.com





Search for a domain's name server (NS) record:



  • nslookup -type=ns domain.com





  • dig ns domain.com





Search for a domain's CNAME record:



  • nslookup -type=cname domain.com






  • dig cname domain.com





Search for a domain's SPF record:



  • nslookup -type=spf domain.com





  • dig spf google.com





List all records for a domain:



  • nslookup -type=any domain.com





  • dig google.com any





  • dig @nameserver domain.com – query a specific DNS server directly (replace nameserver with the server's name or IP)
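The individual lookups above can be rolled into one loop. This sketch only prints the dig commands it would run (example.com is a stand-in domain; swap echo for eval to actually execute them):

```shell
# Build the per-record-type dig queries shown above in one pass.
for type in a mx ns cname txt any; do
  echo "dig $type example.com +short"
done
```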









When migrating zones from GoDaddy, make sure that everything comes across except for the GoDaddy-specific entries, i.e. domaincontrol.com. Double or even triple check the information to make sure that everything needed has been added to the /var/named/domain.com.hosts file.
– Verify that all newly added domains have the named group assigned.
chgrp named /var/named/domain.com.conf
– Verify that the named service configuration file does not have errors.
named-checkconf /etc/named.conf
Also check the domain zone files to make sure that there are no errors.
[root@dns01 named]# named-checkzone directdns.com directdns.com.hosts
zone directdns.com/IN: loaded serial 1389974311
[root@dns01 named]# named-checkzone domain1.com domain1.com.hosts
zone domain1.com/IN: loaded serial 1389974311
– Reload the named service configuration.
[root@dns01 named]# rndc reload
server reload successful
– Restart the named service.

[root@dns01 named]# service named restart
Stopping named: .                                          [  OK  ]
Starting named:                                            [  OK  ]
– Verify the named service status.
[root@dns01 named]# service named status
version: 9.8.2rc1-RedHat-9.8.2-0.23.rc1.el6_5.1 (Not available)
CPUs found: 2
worker threads: 2
number of zones: 48
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is OFF
recursive clients: 0/0/1000
tcp clients: 0/100
server is up and running
named (pid  7264) is running…

[root@dns01 ~]# cat /var/named/domain1.com.hosts
$ttl 300
domain1.com.  IN      SOA     dns01.domain2.com. postmaster.domain2.com (
                        38400 )
domain1.com.  IN      NS      dns01.domain2.com.
domain1.com.  IN      NS      dns02.domain2.com.

@                               MX      10      mx.domain1.com.
@                               TXT     "v=spf1 a mx include:subdomain.domain3.com include:authsmtp.com ~all"
as                              A
sbam                            A
tc                              A
ald                             A
osi                             A
mx                              A
pd                              A
isi                             A
nald                            A
ldsaving                        A
quasar                          A
sat                             A
conectado                       A
nsb                             A
mlld                            A
lds                             A
ctl                             A
peak                            A
cbs                             A
lld                             A
nlds                            A
dld                             A
dp                              A
bnld                            A
bsa                             A
lda                             A
lcr                             A
ceot                            A
ftp                             CNAME   domain1.com.
www                             CNAME   domain1.com.

[root@dns01 ~]# cat /var/named/directdns.com.hosts
$ttl 300
directdns.com.      IN      SOA     dns01.domain2.com. postmaster.domain2.com (
                        38400 )
directdns.com.      IN      NS      dns01.domain2.com.
directdns.com.      IN      NS      dns02.domain2.com.

boss                            A
legent                          A
peak                            A
quasar                          A
telecircuit                     A
ftp                             CNAME   directdns.com.
www                             CNAME   directdns.com.

A few web sites for troubleshooting

Manjaro Mate or Ubuntu 17.04 Mate

Hey guys,

As there have been some issues showing up in the Manjaro / Arch realm, it may be time to switch to an architecture that may be somewhat more stable. I am still checking through some things, and I totally understand that Arch is bleeding edge, but sometimes, depending on what we use the OS for, we may need to step back and take another path. I really do like Arch, as I have been able to find most if not all of the packages I want in either the Arch community or AUR repositories. But a few issues have started cropping up, as follows.

  • Dependency issues with packages. An example has to do with winff and ffmpeg: I have started seeing dependency issues showing up during install. Below shows an install I was trying to do in OBRevenge of the WinFF package, which needs ffmpeg to run. You can easily see the issue that I highlighted.

  • In order to fix the above issue, I had to manually install the ffmpeg-full-git package using yaourt. If yaourt is not installed, do the following.

  • Once yaourt is installed, go ahead and install ffmpeg-full-git using the following

  • Downstream driver issues. There was an issue about a week or so ago which broke a lot of people's desktops containing NVIDIA video cards. An update was introduced without warning, and several machines refused to boot into a GUI; screens went black. This is not good at all.
  • Don't get me wrong, I really do like Manjaro, and Arch in general. I find that it runs much better on my laptop than Ubuntu, but in order to stay with it, I need to figure out how to get past the dependency issues that all of a sudden cropped up. It is possible that they have been there all along and I am just now noticing them, but who knows. This is something we need to live with or figure out while working in Arch.
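On the yaourt point above: the install steps aren't reproduced here, but the usual manual AUR bootstrap can be sketched as follows. This is a dry run that prints each step instead of executing it; the AUR URLs and the package-query dependency are the conventional ones and should be double-checked before use:

```shell
# Dry-run: print the manual yaourt bootstrap, then the ffmpeg-full-git install.
run() { echo "+ $*"; }          # swap `echo` for the real command to execute
run sudo pacman -S --needed base-devel git
run git clone https://aur.archlinux.org/package-query.git
run 'cd package-query && makepkg -si'
run git clone https://aur.archlinux.org/yaourt.git
run 'cd yaourt && makepkg -si'
run yaourt -S ffmpeg-full-git
```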




As you can see below, I have a package called pia-nm which appears to be broken via the AUR repository.

It looks like I did find a potential fix, or workaround, for the package dependency issue that was cropping up in Arch. The following helps make the install easier when there is a dependency issue. An example of when I had to use these steps was installing the PIA VPN. I have not tried this with ffmpeg yet but need to try it out.

  • packer -G packagename
  • cd packagename
  • makepkg -g >> PKGBUILD
  • makepkg
  • sudo pacman -U packagename.pkg.tar.xz

[kf4bzt@tim-laptop ~]$ packer -G pia-nm

[kf4bzt@tim-laptop ~]$ cd pia-nm

[kf4bzt@tim-laptop pia-nm]$ makepkg -g >> PKGBUILD
==> Retrieving sources…
-> Downloading ca.rsa.4096.crt…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2719 100 2719 0 0 10884 0 --:--:-- --:--:-- --:--:-- 10876
-> Downloading servers…
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9431 100 9431 0 0 23204 0 --:--:-- --:--:-- --:--:-- 23229
-> Found process_servers
==> Generating checksums for source files…

[kf4bzt@tim-laptop pia-nm]$ makepkg
==> Making package: pia-nm 24-1 (Tue Apr 18 17:08:29 CDT 2017)
==> Checking runtime dependencies…
==> Checking buildtime dependencies…
==> Retrieving sources…
-> Found ca.rsa.4096.crt
-> Found servers
-> Found process_servers
==> Validating source files with sha512sums…
ca.rsa.4096.crt … Passed
servers … Passed
process_servers … Passed
==> Extracting sources…
==> Starting prepare()…
PIA username (pNNNNNNN): Enter username here
==> Entering fakeroot environment…
==> Starting package()…
==> Tidying install…
-> Removing libtool files…
-> Purging unwanted files…
-> Removing static library files…
-> Stripping unneeded symbols from binaries and libraries…
-> Compressing man and info pages…
==> Checking for packaging issue…
==> Creating package “pia-nm”…
-> Generating .PKGINFO file…
-> Generating .BUILDINFO file…
-> Generating .MTREE file…
-> Compressing package…
==> Leaving fakeroot environment.
==> Finished making: pia-nm 24-1 (Tue Apr 18 17:09:21 CDT 2017)

[kf4bzt@tim-laptop pia-nm]$ sudo pacman -U pia-nm-24-1-x86_64.pkg.tar.xz
loading packages…
resolving dependencies…
looking for conflicting packages…

Packages (1) pia-nm-24-1

Total Installed Size: 0.04 MiB

:: Proceed with installation? [Y/n] y
(1/1) checking keys in keyring [######################] 100%
(1/1) checking package integrity [######################] 100%
(1/1) loading package files [######################] 100%
(1/1) checking for file conflicts [######################] 100%
(1/1) checking available disk space [######################] 100%
:: Processing package changes…
(1/1) installing pia-nm [######################] 100%




The issue with Ubuntu is that not all packages are available, and you either have to find PPAs or download directly from the developer's site. This can be a pain in the rear when you need something right then. Luckily, I haven't run into the issue of needing something yesterday.


As several apps are not available in the repository, below are links to some that I use.

Wavebox (Replacement for wmail) – https://wavebox.io/download

kaption (Was able to install in Manjaro and OBRevenge, but requires certain KDE files in Ubuntu to be able to install) – https://www.linux-apps.com/content/show.php/Kaption?content=139302

slack – https://slack.com/downloads/linux

zoom – https://www.zoom.us/download

angryip – http://angryip.org/download/#linux

etcher – https://etcher.io



Free Certificates Through letsencrypt.org

One thing that I found cool while in training is how SSL certificates can now be had for free through a service called Let's Encrypt. Paid certs still run around $75 a year, which is not bad at all, but for those of us who don't have the funds to spend, or don't host sensitive content, the free SSL route is a great way to go. The certificates need to be renewed every 90 days, but it is still a good way to save customers money on their web hosting packages. Some customers with major secure-connection needs would rather stay with a paid SSL service; in the end it is up to the customer.


The links below take you to the content for an awesome project








Below is an excerpt from the certbot documentation on installing it on different platforms:


Operating System Packages

Arch Linux

sudo pacman -S certbot


Debian

If you run Debian Stretch or Debian Sid, you can install the certbot packages directly.

sudo apt-get update
sudo apt-get install certbot python-certbot-apache

If you don’t want to use the Apache plugin, you can omit the python-certbot-apache package.

Packages exist for Debian Jessie via backports. First you’ll have to follow the instructions at http://backports.debian.org/Instructions/ to enable the Jessie backports repo, if you have not already done so. Then run:

sudo apt-get install certbot python-certbot-apache -t jessie-backports


Fedora

sudo dnf install certbot python2-certbot-apache


FreeBSD

  • Port: cd /usr/ports/security/py-certbot && make install clean
  • Package: pkg install py27-certbot


Gentoo

The official Certbot client is available in Gentoo Portage. If you want to use the Apache plugin, it has to be installed separately:

emerge -av app-crypt/certbot
emerge -av app-crypt/certbot-apache

When using the Apache plugin, you will run into a “cannot find a cert or key directive” error if you’re sporting the default Gentoo httpd.conf. You can fix this by commenting out two lines in /etc/apache2/httpd.conf as follows:


Before:

<IfDefine SSL>
LoadModule ssl_module modules/mod_ssl.so

After:

#<IfDefine SSL>
LoadModule ssl_module modules/mod_ssl.so

For the time being, this is the only way for the Apache plugin to recognise the appropriate directives when installing the certificate. Note: this change is not required for the other plugins.


NetBSD

  • Build from source: cd /usr/pkgsrc/security/py-certbot && make install clean
  • Install pre-compiled package: pkg_add py27-certbot


OpenBSD

  • Port: cd /usr/ports/security/letsencrypt/client && make install clean
  • Package: pkg_add letsencrypt

Other Operating Systems

OS packaging is an ongoing effort. If you’d like to package Certbot for your distribution of choice please have a look at the Packaging Guide.





The following example is for a Debian 8 server that I have. Make sure that you have port 443 open and accessible.


root@timknowsstuff-vm:~# sudo apt-get install python-certbot-apache -t jessie-backports

root@timknowsstuff-vm:~# a2enmod ssl 
Considering dependency setenvif for ssl: 
Module setenvif already enabled 
Considering dependency mime for ssl: 
Module mime already enabled 
Considering dependency socache_shmcb for ssl: 
Enabling module socache_shmcb. 
Enabling module ssl. 
See /usr/share/doc/apache2/README.Debian.gz on how to configure SSL and create self-signed certificates. 
To activate the new configuration, you need to run: service apache2 restart
root@timknowsstuff-vm:~# a2ensite default-ssl
Enabling site default-ssl.
To activate the new configuration, you need to run:
  service apache2 reload
root@timknowsstuff-vm:~# systemctl restart apache2
root@timknowsstuff-vm:~# netstat -paunt | grep apache2
tcp        0      0   *               LISTEN      31195/apache2   
tcp        0      0    *               LISTEN      31195/apache2   

root@timknowsstuff-vm:~# certbot --apache


Below shows the options within the certbot application:

root@timknowsstuff-vm:~# certbot ?
  certbot [SUBCOMMAND] [options] [-d domain] [-d domain] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates.  By default,
it will attempt to use a webserver both for obtaining and installing the
cert. Major SUBCOMMANDS are:

  (default) run        Obtain & install a cert in your current webserver
  certonly             Obtain cert, but do not install it (aka "auth")
  install              Install a previously obtained cert in a server
  renew                Renew previously obtained certs that are near expiry
  revoke               Revoke a previously obtained certificate
  register             Perform tasks related to registering with the CA
  rollback             Rollback server configuration changes made during install
  config_changes       Show changes made to server config during installation
  plugins              Display information about installed plugins
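Since Let's Encrypt certs expire after 90 days, it is worth scheduling renewal up front. A minimal root crontab entry might look like the following; the twice-daily schedule is just my suggestion, and `certbot renew` only acts on certs that are close to expiry:

```
# Hypothetical root crontab entry: try renewal twice a day at quiet hours.
0 3,15 * * * certbot renew --quiet
```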

DNS Explained – Part One

During a training session yesterday, we had a presentation about DNS that made perfect sense. Here are some points that came out of the training which I think everyone can use.


-What does DNS stand for? Depending on whom you Google or ask, the answer is usually Domain Name System (sometimes Domain Name Service)


-What does DNS do? DNS connects the domain name to an IP Address


-DNS is like the phone book of the internet. When a query is made on a domain name, the search is trying to find the IP address associated with that domain name. This is similar to your cell phone's contact list: you see a contact name which points to a phone number.


-ICANN oversees the DNS system – it coordinates how DNS works


-The reason for needing access to DNS when hosting a web site or application is that there is a possibility your IP address may change, and you need to make sure there is no downtime, or the least amount of downtime possible.


-What is a URL? A URL includes a protocol such as http, https, or ftp. The protocol tells what type of communication you are trying to accomplish: http – unsecured web traffic, https – secure web traffic, ftp – file transfer.


-What is a Subdomain? A subdomain breaks the parent domain name into smaller parts. If you look in a DNS control panel, you will see designations such as www, mail, store, docs, etc. These are considered subdomains, as they point to other sections or services of the parent domain.


-What is a Top Level Domain (TLD)? The top level domain information is basically the last part of the domain name. For example, .edu, .com, .net, and .org. These represent what type of site that you have created.


-http://www.google.com/search = URL
-http://search.google.com = subdomain
-.edu, .com, .net, .org = tld (Top Level Domain)


-What are DNS resolvers? DNS resolvers do the phone book lookup which takes the domain name and locates the IP Address that is assigned to that domain name.


-What are name servers? Name servers answer the queries used to locate the IP address of the website. Name servers use zone files, which contain the IP addresses and where traffic needs to go.


-What are some of the DNS Record types used?


An A-record (address record) maps a hostname to an IP Address.



An AAAA-record (address record) maps a hostname to an IPv6 Address.



A CNAME (canonical name) record maps a host name to another hostname or FQDN.

-**A CNAME is NOT a redirect. It is an alias**

-**Do Not CNAME a parent domain. You will break the zone file.**



An MX record is the mail exchanger record, which maps the domain to a particular mail server address with a priority. The lower the priority number (i.e. 10, 20, 30, etc.), the higher the priority that exchanger has.



A TXT (text) record is used to hold arbitrary text information. You can put virtually any free text you want within a TXT record. A TXT record has a hostname so that you can assign the free text to a particular hostname/zone. The most common use for TXT records is to store SPF (Sender Policy Framework) data, which helps prevent emails from being faked to appear to have been sent from you.



An NS (name server) record allows you to delegate a subdomain of your domain to another name server.



An SPF record is a Sender Policy Framework record. An SPF record is actually a specific type of TXT record.



An SPF record is used to stop people receiving forged email. By adding an SPF record into your DNS configuration any mail servers receiving email, that is allegedly from you, will check that the email has come from a trusted source. The trusted sources are provided by the SPF record that you set up.
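The record types above can be tied together in a sample zone-file fragment. This is a sketch only: example.com, the host names, and the addresses are placeholders from the documentation ranges, not a working zone:

```
; Illustrative zone-file records (placeholder names and addresses)
example.com.        IN  A       203.0.113.10
example.com.        IN  AAAA    2001:db8::10
www.example.com.    IN  CNAME   example.com.
example.com.        IN  MX  10  mail.example.com.
example.com.        IN  NS      ns1.example.com.
example.com.        IN  TXT     "v=spf1 a mx ~all"
```

Note the CNAME on www, never on the parent domain itself, per the warning above.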




Use dig with a DNS server IP. In the example I used Google to do a search.






Below is a quick walkthrough of how DNS moves a request from the browser to the hosting server:


-1. Type domain name into browser
-2. Browser does not know IP of domain name so it looks at the resolver for information
-3. Resolver talks to a bunch of NAME Servers until it finds the one that has a ZONE FILE for the domain name.
-4. The resolver reads the ZONE FILE to learn the IP ADDRESS of the domain name
-5. The RESOLVER then tells my computer/browser the IP ADDRESS for the domain name
-6. The web server (Apache here) handles the request and the content is sent back to the local browser.


So basically, it was explained very simply with the following:

When you go to a web site, the domain name needs to be registered. Once registered, there will need to be name server entries added at the registrar showing where the domain lives.


Registrar  –>  Name Servers  –>  Zone File  –>  IP Address





TTL – Time To Live:

The TTL tells the browser, or any resolver, how long it may keep the web site's DNS information before going back out for fresh data. The TTL can be set from 5 minutes to 24 hours depending on the provider; if you need a change to propagate quickly, set it to the lowest value it can go. By setting it lower, you also place a greater load on the DNS side. The TTL change is made within the zone file.
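You can see the remaining TTL for a record in the second column of a dig answer line. A small sketch, using a made-up sample answer in place of a live query:

```shell
# The second field of a dig answer line is the TTL, in seconds.
# example.com and the address are placeholder values.
answer='example.com.  300  IN  A  203.0.113.10'
echo "$answer" | awk '{print "TTL: " $2 " seconds"}'
```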

Domain Name Registrar:

The domain name registrar stores and serves information about a domain name, such as contact information for the owner and when the domain name will expire. This information is reported to ICANN as well.




Here is a brief description on the DNS Resolution process:

– Each domain name has a name server attached in order for internet browsers to find the correct location of the domain.

– Each domain contains an IP Address which is given at the server side that the web service lives on.

– At the registrar of the domain, the name servers are added as ns, ns1, ns2, etc., while the domain-name-to-IP-address mapping is added as an A record.

– When an application needs to resolve the domain name, it looks at the name servers to be able to resolve the information. For example, in linux, the nslookup command is used to resolve the name and IP address.

– Basically, from the client side, you type the domain name you want to visit into a browser. The browser first checks the local or client resolver, which holds cached data. The local cached data may come from a local hosts file or a local BIND service.

– If the client side does not get anything back, the client will question a preferred DNS server which will include the ns.domain.com, ns2.domain.com, etc. When the DNS server gets a query, it will check its local zone files to see if it can give an answer back. If it can not find the information needed in the local zone files, it will go to the local cached data to see what it can find. If the DNS servers can not complete the query, it will try to do a recursive search to fully resolve the domain name.