A Unique Situation for grep (finding files with content matching a specific pattern in Linux)

This article explains how to find all the files that contain a specific text or pattern; if that's what you need, this is the article you’ve been looking for!

So today I was dealing with a customer’s server where they had tried to configure basic auth. I’d found the httpd.conf file for the specific site, but I couldn’t see which file had the basic auth set up wrongly. To save me (and YOU) from looking through hundreds of configuration files for this specific pattern, why not use grep to recursively search the files for the pattern, and use -n to print the filename and line number of every file containing text that matches it?

I really enjoyed this one-liner, and I’d been meaning to put something like this together, because this kind of issue comes up a lot and it can save a lot of time!

 grep -rnw '/' -e "PermitRootLogin"

# OUTPUT looks like

/usr/share/vim/vim74/syntax/sshdconfig.vim:157:syn keyword sshdconfigKeyword PermitRootLogin
/usr/share/doc/openssh-5.3p1/README.platform:37:instead the PermitRootLogin setting in sshd_config is used.

The above recursively searches all files in the root filesystem ‘/’ looking for PermitRootLogin.
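
Searching the whole of ‘/’ can take a long time; if you already have a rough idea of where the configuration lives, you can narrow the path, and GNU grep’s --include will restrict the search to matching filenames. A quick sketch (the paths and glob here are just examples):

# limit the search to /etc instead of the whole filesystem
grep -rnw /etc/ -e "PermitRootLogin"

# or only look inside files whose names match a glob
grep -rnw /etc/ --include='*config*' -e "PermitRootLogin"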

I wanted to find which .htaccess file was responsible, so I ran:

# grep -rnw '/' -e '/path/to/.htpasswd'

# OUTPUT looks like
/var/www/vhosts/somesite.com/.htaccess:14:AuthUserFile /path/to/.htpasswd

Locking down WordPress Permissions

So, WordPress sites do not need chmod 777, although some customers do use it. Traditionally, you will want to set permissions in accordance with this document:

https://codex.wordpress.org/Hardening_WordPress#File_Permissions

The most important pieces are the chmod for folders and the chmod for files, using find to do this en masse:

-type d for directories:

find /path/to/your/wordpress/install/ -type d -exec chmod 755 {} \;

-type f for files:

find /path/to/your/wordpress/install/ -type f -exec chmod 644 {} \;
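
On top of the folder and file modes, the hardening guide also suggests tightening wp-config.php, and it’s worth making sure ownership is consistent while you’re at it. A rough sketch, where the install path and the owner/group (‘wpuser’ here) are assumptions for your environment:

# tighten wp-config.php; 600 works for many setups, some guides suggest 440/400
chmod 600 /path/to/your/wordpress/install/wp-config.php

# make sure the files are owned by the site user rather than root (user/group assumed)
chown -R wpuser:wpuser /path/to/your/wordpress/install/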

Troubleshooting Akamai CDN using Pragma Headers

Rackspace Cloud Files CDN-enabled containers and the Rackspace CDN product can occasionally have an issue. In such cases, you can troubleshoot a lot more easily by using the DEBUG headers. To do this, add -D and the following -H headers to your request:

curl -I http://rackspacecdnurlgoeshereprivatecensored.r17.cf3.rackcdn.com/common/test/generator.png -D - -H "Pragma: akamai-x-get-client-ip, akamai-x-cache-on, akamai-x-cache-remote-on, akamai-x-check-cacheable, akamai-x-get-cache-key, akamai-x-get-extracted-values, akamai-x-get-nonces, akamai-x-get-ssl-client-session-id, akamai-x-get-true-cache-key, akamai-x-serial-no, akamai-x-feo-trace, akamai-x-get-request-id" -L

Naturally, for this to work you need to make sure you have a test URL. You can get one from Cloud Files in the Rackspace Control Panel, or from whatever origin is configured with the CDN. Just log in to the origin and find a file path like httpdocs/somefolder/somefile.png, then get the raxcdn.com URL from the Rackspace CDN product page, or from the Rackspace Cloud Files ‘show all links’ option on the cog icon next to the Cloud Files container, and add the path relative to your document root.

So for the CDN URL rackspacecdnurlgoeshereprivatecensored.r17.cf3.rackcdn.com

For the files on the origin, in the document root of the website configured with the CDN, just add the ‘local’ document path within httpdocs or your www folder!

i.e. rackspacecdnurlgoeshereprivatecensored.r17.cf3.rackcdn.com becomes rackspacecdnurlgoeshereprivatecensored.r17.cf3.rackcdn.com/somefolder/somefile.png
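
If you only care about the caching decisions, you can filter the debug headers out of the response. A small sketch using the same test URL (the grep pattern is just an assumption about which headers you want to see):

curl -sIL http://rackspacecdnurlgoeshereprivatecensored.r17.cf3.rackcdn.com/somefolder/somefile.png -H "Pragma: akamai-x-cache-on, akamai-x-check-cacheable, akamai-x-get-cache-key, akamai-x-get-true-cache-key" | grep -i '^x-'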

Using omconfig to add a RAID 1 device for a Dell PERC 6/i RAID Controller

So, I’ve been provisioning disks and stuff recently; this is how I did it on a Dell. Quite an easy thing to do!
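
Before creating the virtual disk, it’s worth listing the physical disks the controller can see, so you know which IDs to feed to pdisk=. A sketch, assuming OMSA is installed and the controller is ID 0:

omreport storage pdisk controller=0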

omconfig storage controller action=createvdisk controller=0 raid=r1 size=max readpolicy=ara pdisk=0:0:2,0:0:3
Command successful!

In this case the two disks newly added were 0:0:2 and 0:0:3 on the SAS ‘bus’.

An additional primary partition, sdb1, was then created on the new device (one way to do this with parted is sketched below), and a filesystem of the same kind (ext3) as the system disk was created on it:
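
Here is a minimal sketch of that partitioning step, assuming the new virtual disk appeared as /dev/sdb:

# label the disk and create one primary partition spanning the whole device
parted -s /dev/sdb mklabel msdos mkpart primary ext3 0% 100%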

 mkfs.ext3 /dev/sdb1

mke2fs 1.39 (29-May-2006)
....
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

You will naturally need to mount the partition and create an fstab entry to make this permanent:

mount /dev/sdb1 /mnt/backup

echo "/dev/sdb1               /mnt/backup             ext3    defaults        1 1" >> /etc/fstab

You may wish to consider adding the above to fstab manually instead; using echo for it isn’t a good idea in case you make a mistake ;-D
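
Either way, you can check the fstab entry actually works without waiting for a reboot:

umount /mnt/backup
mount -a
df -h /mnt/backup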

Cheers &
Best wishes,
Adam

Mitigating the Dirty Cow vulnerability in CentOS, RedHat, Ubuntu, Debian and Opensuse

How to fix Dirty Cow vulnerability in CentOS, RedHat, Ubuntu, Debian, CloudLinux and OpenSuse Linux servers

The Dirty COW vulnerability has existed for nearly a decade; it has been present in Linux kernel versions since 2.6.22, which was released in 2007.

But the vulnerability only gained attention recently, when attackers started exploiting it. It was assigned CVE-2016-5195 on October 19th, 2016.

What is the Dirty COW vulnerability (CVE-2016-5195)?

CVE-2016-5195, aka the “Dirty COW” vulnerability, is a privilege escalation exploit which affects the way memory operations are handled.

Since the feature that is affected by this bug is the copy-on-write (COW) mechanism in Linux kernel for managing ‘dirty’ memory pages, this vulnerability is termed ‘Dirty COW’.

By misusing this flaw in the kernel, an unprivileged local user can escalate their privileges on the system and thus gain write access to read-only memory mappings.

Using this privilege escalation, local users can write to any file that they can read. Any malicious application or user can thus tamper with critical read-only root-owned files.

Is the Dirty COW vulnerability (CVE-2016-5195) critical?

The Dirty COW vulnerability affects the Linux kernel. Most open-source operating systems such as RedHat, Ubuntu, Fedora, Debian, etc. are based on the Linux kernel.

As a result, this vulnerability is a ‘High’ priority one, as it can affect a huge percentage of servers running on Linux and Android kernels.

The CVE-2016-5195 exploit can be misused by malicious users who have shell access on Linux servers. They can gain root access and attack other users.

When combined with other attacks such as SQL injection, this privilege escalation attack can compromise all the data on these servers, which makes it a critical one.

Are your servers affected by the Dirty COW exploit?

If your server, VM or container is running any of these OS versions, then it is vulnerable:

Red Hat Enterprise Linux 7.x, 6.x and 5.x

CentOS Linux 7.x, 6.x and 5.x

Debian Linux wheezy, jessie, stretch and sid

Ubuntu Linux precise (LTS 12.04), trusty, xenial (LTS 16.04), yakkety and vivid/ubuntu-core

SUSE Linux Enterprise 11 and 12
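
The usual fix is simply to install your vendor’s patched kernel and reboot into it. A quick sketch; package names and fixed kernel versions vary per distribution, so check the vendor advisory first:

# check which kernel is currently running
uname -r

# RHEL / CentOS: pull in the patched kernel, then reboot into it
yum -y update kernel && reboot

# Ubuntu: update the kernel metapackage (Debian uses linux-image-amd64 instead)
apt-get update && apt-get -y install linux-image-generic && reboot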

For more information about mitigating Dirty COW, please see:

How to fix Dirty Cow vulnerability in CentOS, RedHat, Ubuntu, Debian, CloudLinux and OpenSuse Linux servers

Credit to Reeshma Mathews from bobcares.com for this

Resizing PVHVM flavours Down via API

So, in the case that you have a cloud server, such as a Rackspace standard flavour instance, you might want to resize it down.

The only problem is that doing this can result in the MBR being lost. Check out this great article from Jake Coe, explaining how it can be achieved.

https://community.rackspace.com/products/f/25/t/4716?_ga=1.244296325.1822315901.1458550977

Because of the size of the larger VHDs, the control panel will not let you downsize these devices. There may be a way to downsize them IF you are not using more than the required space on the server; however, there are some risks involved, and once the downsize is completed a manual fix will be needed to get the servers to boot.

I would highly suggest that the following be attempted on a non-production server first, as data loss could happen.

Create an image of the existing server you want to downsize.

Once the image is completed, we can modify the minimum disk and RAM to those of the smaller size needed.

On the new server, once it is built at the larger size, you will first need to write zeros to the empty space (the command below fills the disk until it runs out of space, which is expected, then removes the file again):

cat /dev/zero > /zero; rm -f /zero

Once this is done, you can downsize the server to the desired size.

Once the downsize is completed, if this is a PVHVM instance the MBR will be gone and GRUB will need to be reinstalled.
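
The rough shape of that fix is to boot the server into rescue mode, mount the original filesystem, and reinstall GRUB from a chroot. This is only a sketch; the device names (/dev/xvdb here) are assumptions and will vary:

# from rescue mode: mount the original system disk and the pseudo-filesystems
mount /dev/xvdb1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys

# chroot in and reinstall GRUB to the disk's MBR, then exit and reboot
chroot /mnt
grub-install /dev/xvdb
exit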

Configuring SFTP without chroot (the easy way)

So, I wouldn’t normally recommend this to customers. However, there are secure ways to add SFTP access without the SFTP subsystem having to be modified. It’s also possible to achieve a similar setup in a location like /home/john/public_html.

Let’s assume that public_html and everything underneath it is chowned john:john. So john:john has all the access, and apache2 runs with its own uid/gid. This was a pretty strange setup, and you don’t see it every day. But actually, it allowed me to solve another problem that I’ve been seeing customers have for a long time: effectively and easily managing permissions. Once I figured this out it was a serious ‘aha!’ moment. Here’s why.

Inside /etc/group, we find the customer’s developer has done something tragic:

[root@web public_html]# cat /etc/group | grep apache
apache:x:48:john,bob

But fine.. we’ll run with it.

We can see all the files inside their /home/john/public_html, and the sight is not good:

]# ls -al 
total 232
drwxrwxr-x 27 john john  4096 Dec 20 15:56 .
drwxr-xr-x 12 john john  4096 Dec 15 11:08 ..
drwxrwxr-x 10 john john  4096 Dec 16 09:56 administrator
drwxrwxr-x  2 john john  4096 Dec 14 11:18 bin
drwxrwxr-x  4 john john  4096 Nov  2 15:05 build
-rw-rw-r--  1 john john   714 Nov  2 15:05 build.xml
drwxrwxr-x  3 john john  4096 Nov  2 15:05 c
drwxrwxr-x  3 john john 45056 Dec 20 13:09 cache
drwxrwxr-x  2 john john  4096 Dec 14 11:18 cli
drwxrwxr-x 32 john john  4096 Dec 14 11:18 components
-rw-rw-r--  1 john john  1863 Nov  2 15:05 configuration-live.php
-rw-r--r--  1 john john  3173 Dec 15 11:08 configuration.php
drwxrwxr-x  3 john john  4096 Nov  2 15:05 docs
drwxrwxr-x  8 john john  4096 Dec 16 17:17 .git
-rw-rw-r--  1 john john  1734 Dec 14 11:21 .gitignore

It gets worse..

# cat /etc/passwd | grep john
john:x:501:501::/home/john:/bin/sh

Now, adding an SFTP user into this might look like a nightmare, but actually, with some retrospective thought, it was really easy.

Solving this mess:

Install Scponly

yum install scponly

Create new ‘SFTP’ user:

scponlyuser:x:504:505::/home/john:/usr/bin/scponly
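
That line is what ends up in /etc/passwd; one way to create the user (a sketch, reusing john’s existing home directory so a new one isn’t made) is:

useradd -M -d /home/john -s /usr/bin/scponly scponlyuser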

Create a password for user scponlyuser

 
passwd scponlyuser

Solution to john:john permissions

[root@web public_html]# cat /etc/group | grep john
apache:x:48:john,bob
john:x:501:scponlyuser

We simply make scponlyuser part of the john group by adding the second line there. That way, scponlyuser will have read/write access to the same files as the shell user, without exposing anything additional.
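
Rather than editing /etc/group by hand, the same thing can be done with usermod:

usermod -aG john scponlyuser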

This was a cool solution to this customer’s insecure setup, which they wanted to keep the way they had it, and it was also a great way to add an SFTP account without requiring a chroot jail. Whether it’s better than the chroot jail is really debatable; however, scponly enforces that this account can only be used for SCP/SFTP access, without a jail.

I was proud of this achievement.. it goes to show Linux permissions are really more flexible than we might imagine. Whether you really want to flex those permissions muscles, though, should be a concern. I advised this customer to change this setup and remove the /bin/sh, among other things.

We finally test that SFTP is working as expected with the new scponlyuser:


sftp> rmdir test
sftp> get index.php
Fetching /home/john/public_html/index.php to index.php
/home/john/public_html/index.php                                                                                     100% 1420     1.4KB/s   00:00
sftp> put index.php
Uploading index.php to /home/john/public_html/index.php
index.php                                                                                                                100% 1420     1.4KB/s   00:00
sftp> mkdir test
sftp> rmdir test

Just replace ‘scponlyuser’ with whatever username you’re setting up. The only part that you need to keep is the /usr/bin/scponly bit; this is the environment being logged into. Apologies that scponly is so similar to scponlyuser ;-D

scponlyuser:x:504:505::/home/john:/usr/bin/scponly

I was very pleased with this! Hope that you find this useful too!

Block all the IPs from a country

So, I wrote a nice little one-liner for one of our customers that wanted to blanket ban Russia (even though I said it wasn’t a good idea, and only marginally effective at stopping attacks). It might help with spam or other stuff though, and anyway, the customer is always ‘wrong’; it’s up to us to make sure that they do it wrongly, right ;-D

curl http://www.ipdeny.com/ipblocks/data/countries/ru.zone -o russia_ips_all.txt; cat russia_ips_all.txt | xargs -i echo /sbin/iptables -I INPUT -s {} -j DROP
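
Note that the echo only prints the iptables commands so you can review them first; to actually apply them, drop the echo, something like this (a sketch, and bear in mind it inserts one rule per range, so it can take a while):

curl http://www.ipdeny.com/ipblocks/data/countries/ru.zone -o russia_ips_all.txt; cat russia_ips_all.txt | xargs -I {} /sbin/iptables -I INPUT -s {} -j DROP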

Here is how I achieved it above; this bans all the IPs allocated to Russia. But if you aren’t very equal opportunities :(, you can ban all kinds of other countries too:

http://www.ipdeny.com/ipblocks/

Just take a look at this and change the URL accordingly. It doesn’t matter what the filenames say (even if they say russia, just change the URL directly after curl). For instance:

curl http://www.ipdeny.com/ipblocks/data/countries/pl.zone -o ips_all.txt; cat ips_all.txt | xargs -i echo /sbin/iptables -I INPUT -s {} -j DROP

I was really quite happy with this little oneliner. 😀

Cheers &
Best wishes,
Adam

Upgrading Ubuntu 12.04 to 14.04 when getting 404 not found for repo links

A customer had this issue with their very old Ubuntu 12.04 machine.

Use this link to generate a new sources.list, and populate your /etc/apt/sources.list with that detail. Back up the current sources.list you have first.

https://repogen.simplylinux.ch/
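
Backing up the existing file is just:

cp /etc/apt/sources.list /etc/apt/sources.list.bak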

Then to update with the new repo, use:

apt-get update
apt-get dist-upgrade

Please note that going from Ubuntu 12.04 to 14.04 isn’t something I’d recommend you do if you don’t know what you’re doing; certainly expect the possibility that the box might not come back up.

Try populating your /etc/apt/sources.list with this:

#------------------------------------------------------------------------------#
# OFFICIAL UBUNTU REPOS #
#------------------------------------------------------------------------------#

###### Ubuntu Main Repos
deb http://uk.archive.ubuntu.com/ubuntu/ precise main
deb-src http://uk.archive.ubuntu.com/ubuntu/ precise main

###### Ubuntu Update Repos
deb http://uk.archive.ubuntu.com/ubuntu/ precise-security main
deb-src http://uk.archive.ubuntu.com/ubuntu/ precise-security main

Failing that, delete the sources.list and put this in it instead:

#------------------------------------------------------------------------------#
# OFFICIAL UBUNTU REPOS #
#------------------------------------------------------------------------------#

###### Ubuntu Main Repos
deb http://uk.archive.ubuntu.com/ubuntu/ trusty main
deb-src http://uk.archive.ubuntu.com/ubuntu/ trusty main

###### Ubuntu Update Repos
deb http://uk.archive.ubuntu.com/ubuntu/ trusty-security main
deb-src http://uk.archive.ubuntu.com/ubuntu/ trusty-security main