Setting X-Frame-Options HTTP Header to allow SAME or NON SAME ORIGINS

It's possible to increase the security of a webserver running a website by sending the X-Frame-Options header to the browser. This header tells the browser whether the page may be rendered inside a frame or iframe, and with SAMEORIGIN it only allows framing by pages served from the same origin (server) as the site itself. That stops third-party sites embedding your pages, which is the basis of clickjacking. An admirable option for those who wish to increase their server security.

Naturally, there are some reasons why you might want to relax this, and in the proper context it can still be secure. Always discuss such considerations with your pentester or PCI compliance officer before proceeding, and if you do not want to use SAMEORIGIN, make sure you use the most secure option that still achieves the required task. Always check whether there is a better way to achieve what you're trying to do before making such changes to your server configuration.

Insecure: this X-Frame-Options value allows framing from any remote, non-matching origin

Header always append X-Frame-Options ALLOWALL

Secure: this X-Frame-Options value tells the browser not to allow framing of the domain from any other origin, which can prevent clickjacking and other attacks.

Header always append X-Frame-Options SAMEORIGIN
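
A minimal sketch of where this might live in an Apache virtual host (assuming mod_headers is enabled; the domain and port are illustrative):

<VirtualHost *:80>
    ServerName www.example.com
    # mod_headers must be loaded for the Header directive to work
    Header always append X-Frame-Options SAMEORIGIN
</VirtualHost>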

QID 150004 : Path-Based Vulnerability

A customer of ours had an issue with some paths like theirwebsite.com/images returning a 200 OK. Although the page was completely blank and exposed no information, the scanner flagged it as a positive indicator of exposed data purely because of the 200 OK.

More detail: https://community.qualys.com/thread/16746-qid-150004-path-based-vulnerability

In this case it was actually a 'whitescreen', i.e. just a blank index page, put there to stop Options +Indexes in the Apache httpd configuration from listing the contents of the images path. You probably don't want to rely on that, and can simply turn the Indexes option off instead.

Change from:

Options +Indexes
# in older versions it may be defined as
Options Indexes

Change to:

Options -Indexes

This explicitly forbids directory listings. Older versions of apache2 that don't accept the +/- prefix syntax may instead need Indexes simply left out of the Options line altogether, for example:

Options FollowSymLinks

To prevent an attack via .htaccess, you could also add this to httpd.conf to ensure the httpd.conf settings are enforced and take precedence over any hacker or user that enables indexing mistakenly:

<Directory />
    Options FollowSymLinks
    AllowOverride None
</Directory>

Simple enough.
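
To confirm the change has taken effect after reloading Apache, a quick check against the path from the example above (a hedged example; substitute your own domain):

curl -I http://theirwebsite.com/images/

A 403 Forbidden (or your blank index page) in place of a directory listing means the contents are no longer exposed.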

Tracing Down Network and Process Traffic Using Netfilter

Every now and then at Rackspace, as with any hosting provider, we do occasionally have issues where customers have left themselves open to attack. In such cases customers sometimes find their server is sending spam email, or is hosting other malware, on the Rackspace network.

Due to AUP and other obligations, it can become a critical issue for both the uptime and the reputation of your site. In many cases customers do not have forensic experience and struggle to remove the malware. Sometimes the malware keeps coming back, or, as in my customer's case, you can still see lots of extra network traffic with tcpdump run locally on the box.

Enter netfilter, part of the Linux kernel. With a SystemTap probe on its hooks, it can tell you which process packets are coming from. This is really handy if you have an active malware or spam process on your system, since you can find out exactly where it is before doing more investigation. It also lets you rule out false positives, since the packet's destination address is always included, so you get a really nice overview.

To give you an idea, I needed to install the kernel debuginfo packages just to do this troubleshooting; however, this depends on your distribution.

Updating your Kernel may be necessary to use netfilter debug

$ yum history info 18

Transaction performed with:
    Installed     rpm-4.11.3-21.el7.x86_64                               @base
    Installed     yum-3.4.3-150.el7.centos.noarch                        @base
    Installed     yum-plugin-auto-update-debug-info-1.1.31-40.el7.noarch @base
    Installed     yum-plugin-fastestmirror-1.1.31-40.el7.noarch          @base
Packages Altered:
    Updated kernel-debuginfo-4.4.40-202.el7.centos.x86_64               @base-debuginfo
    Update                   4.4.42-202.el7.centos.x86_64               @base-debuginfo
    Updated kernel-debuginfo-common-x86_64-4.4.40-202.el7.centos.x86_64 @base-debuginfo
    Update                                 4.4.42-202.el7.centos.x86_64 @base-debuginfo

You could use a similar process using netfilter.ip.local_in, I suspect.

The Script

#! /usr/bin/env stap

# Print a trace of threads sending IP packets (UDP or TCP) to a given
# destination port and/or address.  Default is unfiltered.

global the_dport = 0    # override with -G the_dport=53
global the_daddr = ""   # override with -G the_daddr=127.0.0.1

probe netfilter.ip.local_out {
    if ((the_dport == 0 || the_dport == dport) &&
        (the_daddr == "" || the_daddr == daddr))
            printf("%s[%d] sent packet to %s:%d\n", execname(), tid(), daddr, dport)
}
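
Because the_dport and the_daddr are declared as globals, you can narrow the trace at run time with stap's -G option rather than editing the script; a hedged example that only reports packets sent to destination port 25, which is handy when hunting a spam process (the script file is the same one executed below):

stap -G the_dport=25 dns_probe.sh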

Executing the Script

[root@pirax-test-new hacked]# chmod +x dns_probe.sh
[root@pirax-test-new hacked]# ./dns_probe.sh
Missing separate debuginfos, use: debuginfo-install kernel-3.10.0-514.2.2.el7.x86_64
swapper/3[0] sent packet to 78.136.44.6:0
sshd[25421] sent packet to 134.1.1.1:55336
sshd[25421] sent packet to 134.1.1.1:55336
swapper/3[0] sent packet to 78.136.44.6:0

I was a little bit concerned about the above output; it looks like swapper (the kernel idle task on CPU 3, reported with tid 0) is doing something it wouldn't normally do. Upon further inspection, though, we find it is just the outgoing Rackspace Cloud Monitoring call:

# nslookup 78.136.44.6
Server:		83.138.151.81
Address:	83.138.151.81#53

Non-authoritative answer:
6.44.136.78.in-addr.arpa	name = collector-lon-78-136-44-6.monitoring.rackspacecloud.com.

Authoritative answers can be found from:

A Unique Situation for grep (finding the files with content matching a specific pattern in Linux)

This article explains how to find all the files that contain a specific text or pattern. If that's what you need, this is the article you've been looking for!

So today I was dealing with a customer's server where they had tried to configure basic auth. I'd found the httpd.conf file for the specific site, but I couldn't see which file had the basic auth set up incorrectly. To save me (and YOU) looking through hundreds of configuration files for this specific pattern, why not use grep to recursively search files for the pattern, and use -n to give the filename and line number of each match?

I really enjoyed this one-liner, and I'd been meaning to put something like this together, because this kind of issue comes up a lot and it can save a lot of time!

 grep -rnw '/' -e "PermitRootLogin"

# OUTPUT looks like

/usr/share/vim/vim74/syntax/sshdconfig.vim:157:syn keyword sshdconfigKeyword PermitRootLogin
/usr/share/doc/openssh-5.3p1/README.platform:37:instead the PermitRootLogin setting in sshd_config is used.

The above recursively searches all files under the root filesystem '/' looking for PermitRootLogin.

I wanted to find which .htaccess file was responsible so I ran;

# grep -rnw '/' -e '/path/to/.htpasswd'

# OUTPUT looks like
/var/www/vhosts/somesite.com/.htaccess:14:AuthUserFile /path/to/.htpasswd
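
If you already know roughly where the configuration lives, the same search can be scoped to that directory tree, which is much faster than walking the whole filesystem (a hedged example matching the vhost layout above):

grep -rnw /var/www/vhosts -e 'AuthUserFile'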

Locking down WordPress Permissions

So, WordPress sites do not need chmod 777, even though some customers use it. Traditionally, you will want to set permissions in accordance with this document:

https://codex.wordpress.org/Hardening_WordPress#File_Permissions

The most important pieces are chmod 755 for directories and chmod 644 for files, using find to apply them en masse.

-type d for directories:

find /path/to/your/wordpress/install/ -type d -exec chmod 755 {} \;

-type f for files:

find /path/to/your/wordpress/install/ -type f -exec chmod 644 {} \;
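
The same hardening guide also suggests locking down wp-config.php more tightly than ordinary files, since it holds the database credentials (a hedged example; 600 keeps it readable only by the owning user, while some setups prefer 440/400):

chmod 600 /path/to/your/wordpress/install/wp-config.php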

Mitigating the Dirty Cow vulnerability in CentOS, RedHat, Ubuntu, Debian and Opensuse

How to fix Dirty Cow vulnerability in CentOS, RedHat, Ubuntu, Debian, CloudLinux and OpenSuse Linux servers

The Dirty COW vulnerability has been present in Linux kernel versions since 2.6.22, which was released in 2007, so the flaw sat unnoticed for almost a decade.

But the vulnerability only gained attention recently, when hackers started exploiting it. It was assigned CVE-2016-5195 and publicly disclosed on October 19th, 2016.

What is the Dirty COW vulnerability (CVE-2016-5195)?

CVE-2016-5195, aka the "Dirty COW" vulnerability, is a privilege escalation flaw in the way the kernel handles certain memory operations.

Since the affected feature is the copy-on-write (COW) mechanism the Linux kernel uses to manage 'dirty' memory pages, the vulnerability is termed 'Dirty COW'.

By misusing this kernel flaw, an unprivileged local user can escalate their privileges and gain write access to otherwise read-only memory mappings.

Using this privilege escalation, local users can write to any file that they can read. Any malicious application or user can thus tamper with critical read-only root-owned files.

Is the Dirty COW vulnerability (CVE-2016-5195) critical?

The Dirty COW vulnerability affects the Linux kernel itself. Most open-source operating systems such as RedHat, Ubuntu, Fedora, Debian, etc. are based on the Linux kernel.

As a result, this vulnerability is a 'High' priority one, as it can affect a huge percentage of servers running Linux and Android kernels.

The CVE-2016-5195 exploit can be misused by malicious users who have shell access on Linux servers. They can gain root access and attack other users.

When combined with other attacks such as SQL injection, this privilege escalation can even compromise all of the data on these servers, which makes it a critical one.

Are your servers affected by the Dirty COW exploit?

If your server or VM or container is hosted with any of these OS versions, then they are vulnerable:

Red Hat Enterprise Linux 7.x, 6.x and 5.x

CentOS Linux 7.x, 6.x and 5.x

Debian Linux wheezy, jessie, stretch and sid

Ubuntu Linux precise (LTS 12.04), trusty, xenial (LTS 16.04), yakkety and vivid/ubuntu-core

SUSE Linux Enterprise 11 and 12
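
A quick first check (a minimal sketch; the fix article linked below covers each distribution in detail) is to look at the kernel version you are actually running, and on RHEL/CentOS to pull the patched kernel package and reboot into it:

# show the currently running kernel version
uname -r

# on RHEL / CentOS the fix ships as a kernel update; reboot afterwards to load it
yum -y update kernel
reboot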

For more information about protecting yourself against Dirty COW, please see:

How to fix Dirty Cow vulnerability in CentOS, RedHat, Ubuntu, Debian, CloudLinux and OpenSuse Linux servers

Credit to Reeshma Mathews from bobcares.com for this

Configuring SFTP without chroot (the easy way)

So, I wouldn't normally recommend this to customers. However, there are secure ways to add SFTP access without the SFTP subsystem having to be modified. It's also possible to achieve a similar setup in a location like /home/john/public_html.

Let's assume that public_html and everything underneath it is chowned john:john, so john:john has all the access, and apache2 runs with its own uid/gid. This was a pretty strange setup, and you don't see it every day, but it actually allowed me to solve another problem that I've seen customers struggle with for a long time: effectively and easily managing permissions. Once I figured this out it was a serious 'aha!' moment. Here's why.

Inside /etc/group, we find the customer's developer has done something tragic:

[root@web public_html]# cat /etc/group | grep apache
apache:x:48:john,bob

But fine.. we’ll run with it.

We can see all the files inside their /home/john/public_html, and the sight is not good:

]# ls -al 
total 232
drwxrwxr-x 27 john john  4096 Dec 20 15:56 .
drwxr-xr-x 12 john john  4096 Dec 15 11:08 ..
drwxrwxr-x 10 john john  4096 Dec 16 09:56 administrator
drwxrwxr-x  2 john john  4096 Dec 14 11:18 bin
drwxrwxr-x  4 john john  4096 Nov  2 15:05 build
-rw-rw-r--  1 john john   714 Nov  2 15:05 build.xml
drwxrwxr-x  3 john john  4096 Nov  2 15:05 c
drwxrwxr-x  3 john john 45056 Dec 20 13:09 cache
drwxrwxr-x  2 john john  4096 Dec 14 11:18 cli
drwxrwxr-x 32 john john  4096 Dec 14 11:18 components
-rw-rw-r--  1 john john  1863 Nov  2 15:05 configuration-live.php
-rw-r--r--  1 john john  3173 Dec 15 11:08 configuration.php
drwxrwxr-x  3 john john  4096 Nov  2 15:05 docs
drwxrwxr-x  8 john john  4096 Dec 16 17:17 .git
-rw-rw-r--  1 john john  1734 Dec 14 11:21 .gitignore

It gets worse..

# cat /etc/passwd | grep john
john:x:501:501::/home/john:/bin/sh

Now, adding an SFTP user into this might look like a nightmare, but with a bit of thought it was actually really easy.

Solving this mess:

Install Scponly

yum install scponly

Create the new 'SFTP' user; the resulting /etc/passwd entry looks like this:

scponlyuser:x:504:505::/home/john:/usr/bin/scponly
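
A hedged sketch of creating that user (the UID and GID are assigned automatically, so they may differ from the entry above; -M re-uses the existing /home/john rather than creating a new home directory):

useradd -M -d /home/john -s /usr/bin/scponly scponlyuser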

Create a password for user scponlyuser:

passwd scponlyuser

Solution to john:john permissions

[root@web public_html]# cat /etc/group | grep john
apache:x:48:john,bob
john:x:501:scponlyuser

We simply make scponlyuser part of the john group by adding the second line there. That way, the scponlyuser will have read/write access to the same files as the shell user, without exposing any additional stuff.

This was a cool way of working with the customer's insecure setup, which they wanted to keep as it was, and it is also a great way to add an SFTP account without requiring a root jail. Whether it's better than a chroot jail is really debatable, but scponly enforces that this account can only be used for SCP/SFTP, giving SFTP access without a jail.

I was proud of this achievement; it goes to show Linux permissions are really more flexible than we might imagine. Whether you really want to flex those permissions muscles, though, is another question. I advised this customer to change this setup and remove the /bin/sh shell, among other things.

Finally, we test that SFTP is working as expected with the new scponlyuser:
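
Connecting from a client looks like this (a hedged example; substitute your own server hostname):

sftp scponlyuser@yourserverhostname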


sftp> rmdir test
sftp> get index.php
Fetching /home/john/public_html/index.php to index.php
/home/john/public_html/index.php                                                                                     100% 1420     1.4KB/s   00:00
sftp> put index.php
Uploading index.php to /home/john/public_html/index.php
index.php                                                                                                                100% 1420     1.4KB/s   00:00
sftp> mkdir test
sftp> rmdir test

Just replace 'scponlyuser' with whatever username you're setting up. The only part that you need to keep is the '/usr/bin/scponly' shell, since that is the restricted environment the user logs into. Apologies that 'scponly' is so similar to 'scponlyuser' ;-D

scponlyuser:x:504:505::/home/john:/usr/bin/scponly

I was very pleased with this! Hope that you find this useful too!

Block all the IPs from a country

So, I wrote a nice little one-liner for one of our customers that wanted to blanket ban Russia (even though I said it wasn't a good idea, and was only marginally effective at stopping attacks). It might help with spam or other stuff though, and anyway, the customer is always 'wrong'; it's up to us to make sure that they do it wrongly right. ;-D

curl http://www.ipdeny.com/ipblocks/data/countries/ru.zone -o russia_ips_all.txt; cat russia_ips_all.txt | xargs -i echo /sbin/iptables -I INPUT -s {} -j DROP

Here is how I achieved it above. Note that, as written, xargs only echoes the iptables commands so you can review them first; remove the echo (or pipe the output into a shell) to actually apply the DROP rules and ban all the IPs allocated to Russia. But if you aren't very equal opportunities :(, you can ban all kinds of other countries:

http://www.ipdeny.com/ipblocks/

Just take a look at that list and change the URL accordingly. It doesn't matter what the file names say (even if they say russia, just change the URL directly after curl). For instance:

curl http://www.ipdeny.com/ipblocks/data/countries/pl.zone -o ips_all.txt; cat ips_all.txt | xargs -i echo /sbin/iptables -I INPUT -s {} -j DROP
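
If you are happy with the commands the echo prints out, a hedged sketch of actually applying them (re-using the Russia zone file from above; be aware that thousands of individual iptables rules can slow packet processing, so an ipset may scale better for large lists):

curl http://www.ipdeny.com/ipblocks/data/countries/ru.zone -o russia_ips_all.txt
while read -r net; do /sbin/iptables -I INPUT -s "$net" -j DROP; done < russia_ips_all.txt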

I was really quite happy with this little oneliner. 😀

Cheers &
Best wishes,
Adam

Enabling Automatic Security Updates in CentOS 6, 7 and RHEL 6 and RHEL 7 (and Debian and Ubuntu too)

yum -y install yum-cron

This can also be done on Debian and Ubuntu systems if you are feeling left out:

apt-get -y install unattended-upgrades
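
On Debian/Ubuntu the periodic run is normally switched on with dpkg-reconfigure, which writes /etc/apt/apt.conf.d/20auto-upgrades; the behaviour itself is tuned in /etc/apt/apt.conf.d/50unattended-upgrades (a hedged note, as package defaults vary between releases):

dpkg-reconfigure -plow unattended-upgrades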

Configuration on the CentOS/RHEL side is:

da da da da da! Actually, that's pretty much all you need to do to enable it, but there are a lot of things you can customise in /etc/yum/yum-cron.conf.
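
One thing worth checking on CentOS/RHEL 7 is that the yum-cron service itself is enabled and started, otherwise nothing will actually run (a hedged reminder; on CentOS/RHEL 6 use service and chkconfig instead):

systemctl enable yum-cron
systemctl start yum-cron

The shipped /etc/yum/yum-cron.conf looks like this: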

[commands]
#  What kind of update to use:
# default                            = yum upgrade
# security                           = yum --security upgrade
# security-severity:Critical         = yum --sec-severity=Critical upgrade
# minimal                            = yum --bugfix update-minimal
# minimal-security                   = yum --security update-minimal
# minimal-security-severity:Critical = yum --sec-severity=Critical update-minimal
update_cmd = default

# Whether a message should be emitted when updates are available,
# were downloaded, or applied.
update_messages = yes

# Whether updates should be downloaded when they are available.
download_updates = yes

# Whether updates should be applied when they are available.  Note
# that download_updates must also be yes for the update to be applied.
apply_updates = no

# Maximum amount of time to randomly sleep, in minutes.  The program
# will sleep for a random amount of time between 0 and random_sleep
# minutes before running.  This is useful for e.g. staggering the
# times that multiple systems will access update servers.  If
# random_sleep is 0 or negative, the program will run immediately.
# 6*60 = 360
random_sleep = 360


[emitters]
# Name to use for this system in messages that are emitted.  If
# system_name is None, the hostname will be used.
system_name = None

# How to send messages.  Valid options are stdio and email.  If
# emit_via includes stdio, messages will be sent to stdout; this is useful
# to have cron send the messages.  If emit_via includes email, this
# program will send email itself according to the configured options.
# If emit_via is None or left blank, no messages will be sent.
emit_via = stdio

# The width, in characters, that messages that are emitted should be
# formatted to.
output_width = 80


[email]
# The address to send email messages from.
email_from = root@localhost

# List of addresses to send messages to.
email_to = root

# Name of the host to connect to to send email messages.
email_host = localhost


[groups]
# NOTE: This only works when group_command != objects, which is now the default
# List of groups to update
group_list = None

# The types of group packages to install
group_package_types = mandatory, default

[base]
# This section overrides yum.conf

# Use this to filter Yum core messages
# -4: critical
# -3: critical+errors
# -2: critical+errors+warnings (default)
debuglevel = -2

# skip_broken = True
mdpolicy = group:main

# Uncomment to auto-import new gpg keys (dangerous)
# assumeyes = True
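
Note that with the defaults shown above, updates are only downloaded (apply_updates = no). To live up to the title of this post and have security updates applied automatically, the relevant keys are (a minimal sketch; adjust to your own change-control comfort level):

[commands]
update_cmd = security
apply_updates = yes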

Checking a Website or Rackspace Load Balancers Supported SSL Ciphers

So, you may have recently had an audit performed, or have been warned about the dangers of SSLv3, the POODLE attack, Heartbleed, and so on. You want to understand exactly which ciphers you're using on the Load Balancer, cloud server, or dedicated server. It's actually very easy to do this with nmap. Install it first, naturally.

# CentOS / RedHat
yum install nmap

# Debian / Ubuntu
apt-get install nmap

# Check for SSL ciphers

# nmap hostnamegoeshere.com --script ssl-enum-ciphers -p 443

Starting Nmap 6.47 ( http://nmap.org ) at 2016-10-11 09:12 UTC
Nmap scan report for 134.213.236.167
Host is up (0.0017s latency).
PORT    STATE SERVICE
443/tcp open  https
| ssl-enum-ciphers:
|   SSLv3: No supported ciphers found
|   TLSv1.0:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_RSA_WITH_AES_256_CBC_SHA - strong
|     compressors:
|       NULL
|   TLSv1.1:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA - strong
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA - strong
|       TLS_RSA_WITH_AES_128_CBC_SHA - strong
|       TLS_RSA_WITH_AES_256_CBC_SHA - strong
|     compressors:
|       NULL
|   TLSv1.2: No supported ciphers found
|_  least strength: strong

Nmap done: 1 IP address (1 host up) scanned in 1.57 seconds

In this case we can see that only TLS v1.1 and TLS v1.0 are supported. No TLSv1.2 and no SSLv3.
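
If you want to double-check a single protocol rather than the whole matrix, openssl s_client can attempt a handshake with just that version (a hedged example; the hostname is a placeholder, as above):

openssl s_client -connect hostnamegoeshere.com:443 -tls1_2 < /dev/null

If the handshake fails, the server does not offer that protocol.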

Cheers &
Best wishes,
Adam