Yesterday, a vulnerability in an old version of Revolution Slider was reported. The vulnerability allows visitors to view arbitrary files on the web server, like wp-config.php, without being logged in. All you need to view any file on the server is to know the location of the file and for the web-server user to have permission to view it.

According to ThemePunch, the plugin developer, the vulnerability was patched 29 versions ago in February, but they decided not to publicize the severity of the issue, aside from a single ‘fixed security issue’ line in their change log. This was because:

“[We were] told not to make the exploit public by several security companies so that the instructions of how to hack the slider will not appear on the web.”

As a result of this negligence, and the way that Revolution Slider is updated and bundled with themes, any website not running a recent version of Revolution Slider was left vulnerable for months to an extremely serious file inclusion vulnerability.

It’s Your Fault For Not Updating

This seems to be the company line on the issue. After the vulnerability was made public, they stated:

“You should always keep the slider up to date like any other WordPress component but urgently need to do this when using Version 4.1.4 or below in order to fix the security issue. [...] We are sorry for you guys out there whose slider came bundled with a theme and the theme author did not update the slider. Since you cannot use the included autoupdate function please contact your theme author and inform him about his failure!”

And it is true, you should keep your plugins updated.

However, this is a paid plugin and doesn’t allow easy updates like a normal WordPress plugin does. Further, on all of the sites I fixed, there didn’t appear to be a nag in the backend telling the user an update was available. So, unless you are a developer and actively visited the plugin’s website, you wouldn’t even know the plugin needs to be updated, let alone that it has an extremely serious security vulnerability.

As they mention, they sell a developer license that allows developers to include the plugin in their theme. When the plugin is included with a theme, you can’t update it without updating the theme. So, any theme that isn’t regularly updated is at risk. And, since some shoddy developers edit the theme directly, rather than making a proper child theme, it isn’t always easy to update the theme. This, of course, isn’t the fault of ThemePunch, nor is developing like this good practice, but it does happen and is going to be a legitimate problem for people.

Even if you actually have the premium plugin itself, you can’t just update it. There is no auto-update feature (at least not in the vulnerable versions I saw), so you can’t update it like you would a regular WordPress plugin. Nor, to my knowledge, is there an update nag on the plugin page telling users they need to update. Instead, you need to download the premium plugin, which requires a login to the site that sells it.

The catch here is that most website owners aren’t going to have access to the login information needed to update the plugin. Your average website owner isn’t a developer. They probably paid someone to create the theme, who presumably installed a valid copy of the plugin. Unfortunately, due to the nature of the web design business, this means that hundreds (thousands?) of sites are silently exposed to an extremely serious vulnerability and won’t even know it, unless they have a responsible web developer or host. Again, this isn’t the fault of ThemePunch, but it is a fault of the premium plugin model when it doesn’t allow for quick/easy updates.

Negligence Through Security Through Obscurity

According to the plugin author, this vulnerability was fixed in February, but they chose not to report it. It has been reported that the vulnerability was publicly disclosed months ago, and regardless, it is safe to say that it has been known to some people for the past few months.

By choosing not to report the vulnerability or make site owners aware of this huge security risk, they effectively pushed back the date when we found out about it, leaving their customers’ sites vulnerable to a known attack. And now that it is released and being exploited like mad, we are left scrambling to fix it anyway. So, not reporting it only helped the bad guys.

I fully understand that this is a paid plugin and that they need to protect it. I get that. And I understand that you should keep your plugins updated. Nothing in their statements that I have seen is untrue.

However, in the event of a serious vulnerability like this, not making a valid attempt at reporting it, especially when you know that your plugin doesn’t get updated frequently and the vulnerability likely impacts a large number of sites, is negligent.

Updating a Plugin You Can’t Update

I don’t use this plugin in the WordPress templates I develop, but it is used by several clients that I host. I found it bundled in two clients’ themes and installed as a plugin for two other clients. All four had their wp-config.php file downloaded already, and all sites on my servers have now been scanned for this vulnerability.

I wrote a quick and dirty patch for the outputImage function, which you can view here. This is only meant as a temporary fix until you can assess the issue and do a proper update, but since this attack is ongoing and widespread, you should take some sort of action ASAP.
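If you want to check your own installs, the attack, as described in public disclosure write-ups, abuses the plugin’s AJAX image handler to read arbitrary files. A minimal sketch (the action and parameter names are taken from those public reports, not verified against every plugin version) that builds the probe URL:

```shell
#!/bin/sh
# Sketch only: builds the reported Revolution Slider LFI probe URL for
# a site you own. The action/parameter names come from public
# disclosure write-ups and are assumptions here.
build_probe_url() {
  site="$1"   # e.g. https://example.com
  target="$2" # e.g. ../wp-config.php
  printf '%s/wp-admin/admin-ajax.php?action=revslider_show_image&img=%s\n' \
    "$site" "$target"
}
```

Pipe the result to curl and grep for something like DB_NAME to see whether wp-config.php is exposed, and only run this against sites you are authorized to test.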

Mod_Security also appears to block the attack.
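A hedged sketch of such a rule (the rule id and message are my own, not from any official ruleset; adapt to your configuration) that denies requests combining the revslider_show_image action with path traversal in the img argument:

```apache
# Sketch only: deny revslider_show_image requests whose img argument
# contains a path traversal sequence.
SecRule ARGS:action "@streq revslider_show_image" \
    "id:1009901,phase:2,deny,status:403,log,msg:'Revolution Slider LFI attempt',chain"
    SecRule ARGS:img "@contains ../"
```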

If you have visited windows.microsoft.com lately using Internet Explorer 7, you have probably seen the “It’s time to upgrade your browser” nag, which explains that IE7 and IE6 are no longer supported and blocks you from browsing their site until you upgrade.

This is a great step; even with XP support going away, Vista shipped with Internet Explorer 7, so IE7 will not be dead for some time. When they first started doing this on the Windows site, I thought it was cool that they were finally doing something to clean up the mess they created with their fragmented browser ecosystem.

However, Internet Explorer 8 is still a pretty bad browser…certainly better than IE7, but that isn’t saying much.

If you are going to break your website to force an upgrade, it would be great to use that as a platform to get users onto the latest version of Internet Explorer that you can. So, if they are on Vista, go ahead and tell them to upgrade to IE9. Better yet, add an optional tool they can use to verify that automatic updates are on and set to update automatically, since if they are running IE7, they may not be getting security updates either. And if they are on IE8, go ahead and add a nag for that too! Although that one may be trickier, as in corporate environments, upgrading past Internet Explorer 8 may not be possible. So, rather than fully breaking the site, a clear warning would probably suffice. It would be a nice kick in the butt for companies that haven’t upgraded yet as well.

This seems like the right thing to do, especially as dropping support for IE8 has already begun on a number of popular websites. Even Microsoft’s Office 365 has recently announced they are no longer supporting IE8.

I recently had a computer repair job for someone who needed me to downgrade Windows 8 to Windows 7, because Windows 8 was not compatible with their work software. For anyone who hasn’t done it, UEFI can make this process a bit tricky, as access to the BIOS settings can be limited and booting to removable media troublesome.

When I returned to their house, I did the basic setup and familiarization with them to make sure they were comfortable with how everything worked, discussed anti-virus, and went over some of the tools I pre-install when I do a Windows re-installation.

They wanted to run their work software while I was still there and I got a pleasant surprise when they booted up an Ubuntu based Live CD.

The company they work for operates through Arise, which offers virtual call centers. They were in the process of training to work with Sprint, so I am not 100% sure what the process is like after training. However, since they are training in a Linux Live environment, I would be surprised if they didn’t use it for actual work as well.

I didn’t do anything aside from making sure it would boot, but it looked like a very minimal Gnome install, possibly Gnome 2 or at least classic shell. There were only icons for Firefox, a calculator, and some minimal settings. It utilized a bootable USB alongside a CD.

I thought this was really cool and a great idea. Linux runs great on most hardware and you can get awesome performance out of older hardware, where Windows would be slow once you started doing any real work.

Using a Live CD is also much more secure, much like using a Live CD for online banking: every time you restart, it should be to a known-good operating system and programs. I would imagine this also gives the company a lot more control and monitoring capability, which they wouldn’t have if they let people use their personal computers. Since so many people’s personal computers have some form of spyware or malware, I see this as a no-brainer for companies that run these sorts of remote operations.

Not all companies do this, though. I have worked on the computer of someone who worked for American Airlines. He had worked for American for a while as a call center rep and took the opportunity to work from home when they offered it to him a few years ago. As far as I could tell, there wasn’t any sort of required anti-virus and very little oversight of what was running on his computer, aside from running American’s call center software. Pretty scary when you think about how often their call center reps probably deal with credit card information and other confidential information during the day.

While viewing SoftLayer’s website today, I noticed something rather amusing…they are using Amazon’s CloudFront to deliver at least one script on their homepage.

SoftLayer is a hosting company that does a lot of dedicated hosting. However, especially since being acquired by IBM, they have been heavily promoting the cloud side of their business. So it is sort of funny that they are using their biggest competitor’s service on the homepage of their own website to serve up JavaScript, rather than their own CloudLayer or other infrastructure.

Of course, you should use what works, works well, and is easy, which in many cases is an offering from Amazon AWS. However, if your goal is ‘to Accelerate Adoption of Cloud Computing in the Enterprise,’ you should probably keep your hosting in-house as much as possible and not showcase the reliability and reach of your competitor.

As an aside, this reminds me of a similar situation a few years ago when Dreamhost was having some major downtime. I noticed that the Dreamhost status blog was always up, despite probably getting crazy traffic from all of their customers, while their shared hosting infrastructure seemed to be crumbling. I checked and saw they were using Linode to power their status blog, and they still do. That ended up being a good recommendation, as I switched to Linode shortly after that and have been happy with them overall ;)


See the bottom of the post for the TLDR problem/solution.

I have been using XFCE for some time now and overall really enjoy it. I switched to XFCE after giving Gnome 3 a go when it first came out and have been using it since. It has gotten a lot better since then too.

For instance, immediately after switching, one of the only things I missed from Gnome 2 was tabbed file browsing. Thunar, the default file manager for XFCE, got that a while ago and has generally been improving a lot.

Another change to XFCE is the way it remembers your desktop settings, windows, and programs when you log out. I admittedly have not researched this as much as I should, but anecdotally I noticed some changes to how this works when I upgraded to a newer version of XFCE recently. I also noticed that there seems to have been a change in the way that XFCE deals with multiple monitors: after upgrading, certain programs started using the entire width of two monitors when initially drawing their windows, rather than using a single monitor as they had in the past.

Onto the problem: after getting a new monitor that supported a higher resolution (1920×1080) and updating my xorg.conf, my resolution would get reset to the old resolution (1680×1050) as soon as I logged back in to XFCE.

I use the Nvidia drivers, as I have found them to offer a bit better performance and support for a multi-monitor setup, not to mention they are generally quite easy to configure. It’s been a while since I tried it, but the built-in display manager for XFCE has not been well suited to multiple monitors in the past, while nvidia-settings provides a nice, easy-to-use GUI for arranging and setting up displays.

I tried several different things with my xorg.conf and nvidia-settings, including removing it altogether, as well as a variety of different configurations. However, no matter what I had in my xorg.conf, as soon as I logged in, the resolution was reset to the old one. It seemed like XFCE was ignoring my xorg.conf settings or overriding them.

I was fairly confident that the xorg.conf was correct, so I began looking elsewhere. I grepped my ~/.config folder for my old resolution and did in fact find the old resolution listed in xfce4/xfconf/xfce-perchannel-xml/displays.xml.

I tried changing it there to the new resolution, but it still reverted back to the old one. Finally, after being a bit fed up and fairly confident that the saved settings/sessions were to blame, I moved my config folder to a backup: mv ~/.config ~/.config_back

This unfortunately has the side effect of clearing all (or most) of your saved XFCE settings, but as soon as I did that, it started using the new resolution. I have in the past done some messing with xrandr settings in order to get multiple monitors working better, so it is possible this is my own doing, but there was definitely some XFCE setting in my config that was reverting the resolution.

This is something that I should learn more about and rtfm a bit, but sometimes killing it with fire works and is the easiest/quickest solution…

TLDR:

Problem: After getting a new monitor, the resolution specified in xorg.conf was ignored when logging in to XFCE. Instead, each time I logged in, it reverted to the old resolution.

Bad Solution: This is probably not the best way to address the problem. However, moving ~/.config to ~/.config_back cleared out whatever XFCE setting was overriding my xorg.conf and let me use the new resolution.

Caution: Again, this isn’t a good solution, but it worked. If you do the above, it WILL delete all of your XFCE settings, like panels! A better solution would be to learn why/where the setting that maintains the old resolution is kept and change it there!
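If you want something less destructive than moving all of ~/.config, a hedged sketch (the displays.xml path comes from my grep above; XFCE may also stash resolution data in saved sessions, in which case this alone won’t be enough) that backs up only the display settings file:

```shell
#!/bin/sh
# Sketch only: move aside just XFCE's saved display settings instead of
# the whole ~/.config. Log out and back in afterwards; if the old
# resolution still comes back, a saved session is the next suspect.
backup_displays_xml() {
  cfgdir="$1" # normally "$HOME/.config"
  f="$cfgdir/xfce4/xfconf/xfce-perchannel-xml/displays.xml"
  if [ -f "$f" ]; then
    mv "$f" "$f.bak" && echo "backed up $f"
  else
    echo "no displays.xml found under $cfgdir"
  fi
}
```

Run it as `backup_displays_xml "$HOME/.config"` before resorting to the full-folder move.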

Recently, I ran into a weird issue with a 1G iPad and had to figure out a work-around to install apps on it.

One of my clients inherited an old first-generation iPad from a friend. Before his friend gave it to him, it was wiped and factory restored.

After setting it up, when attempting to install certain apps, like Google Maps, Google Chrome, or Netflix, he got the following message: “This application requires iOS 6.0 or later. You must update to iOS 6.0 in order to download and use this application.”

Of course, the last version of iOS supported on this iPad was 5.1.1, so their suggestion is not possible.

After talking to a friend who has an old iPad and doing some reading, it seemed like most people would get a prompt to download the last compatible version of the app. However, even after wiping it again via iTunes and making sure everything was set up, it still wouldn’t let us install old apps.

However, after a bit of playing around, I figured out a workaround that let us install both Netflix and Google Chrome on the iPad.

The Problem

When installing an app on a first-generation iPad, a warning stating “This application requires iOS 6.0 or later.” is shown and installation is blocked.

The Workaround

  1. Install iTunes on a computer and sync the iPad
  2. Install the desired apps onto the iPad via iTunes
  3. Wait until the apps finish downloading, then unplug the iPad
  4. The apps will attempt to install, but will hang on the iPad
  5. Delete the apps from the iPad
  6. On the iPad, go into the App Store and re-install the app
  7. You will now be prompted to install the last compatible version of the app


This worked for both Netflix and Chrome; however, Google Maps, which we did NOT install via iTunes first, still gave the upgrade-needed error.

Why does this work?

I can only guess, but it seems like at some point Apple changed their policy on old devices and started allowing people to install older versions of software on them. I found a reddit thread from two months ago that discussed the change.

However, we were using a brand new iCloud/iTunes account, which had never installed any apps.

So, presumably, Apple only allows you to install compatible versions of apps you already own. When I asked my friend, he had no issue installing any app, including Google Maps, which was not already on his 1G iPad; however, he had installed it before on other devices. By installing an app first via iTunes, even though it doesn’t actually work, Apple will then allow you to install an older version…

Some time yesterday, Google’s Safe Browsing service detected malware on PHP’s main site, php.net. As a result, if you visit it right now in a browser that uses Google’s Safe Browsing list, like Chrome or Firefox, you will get a warning message, and when viewing it in Google SERPs, you will see the ‘This site may harm your computer’ warning.

I use PHP a great deal and think that a lot of the dislike people have for the language is misplaced, but I do see the humor in the warning message showing up when you search for ‘php.’

Were PHP’s Servers Compromised?

Rasmus, as well as a few others involved with PHP, has stated on Twitter and in a Google Groups thread that the file in question, ‘userprefs.js,’ was not compromised. In a tweet from this morning, Rasmus said ‘They [Google] point to a js code injection which was deliberate’

However, in the same Google Groups thread, someone from Google indicated the userprefs.js file had changed, and on YCombinator, someone found a version of the file in their cache which had what appeared to be an obfuscated JavaScript payload in it. The same Google employee also later posted in the YCombinator thread, stating quite clearly that it was not a false positive and that the obfuscated version was similar to what they found.

I checked a number of PHP mirrors, and while I did find two different versions of userprefs.js, neither was the obfuscated version.
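Comparing mirror copies is easy to do by checksum. A minimal sketch (mirror URLs omitted; fetching the file from each mirror with curl is left to you) that groups local copies by hash so differing versions stand out:

```shell
#!/bin/sh
# Sketch only: after fetching userprefs.js from several mirrors into
# one directory, group the copies by SHA-256. Each output line is a
# count of copies sharing one distinct hash.
hash_copies() {
  sha256sum "$@" | sort | awk '{print $1}' | uniq -c
}
```

For example, `hash_copies mirrors/*.js` printing two lines means the mirrors are serving two distinct versions of the file.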

I will update this post with more later, as it becomes available.

Update 2013-10-24 13:00: As of now, the warning message is no longer appearing when doing a Google search, and visiting the site doesn’t result in a warning, so it appears that php.net has been removed from the Safe Browsing list. I haven’t seen an update from Rasmus or others with any more details yet.

Update 2013-10-24 17:00: An update has been posted to PHP’s news section, confirming that they were compromised. They state that an rsync job was reverting changes being made to userprefs.js, presumably because the local server was compromised. An initial code review has been performed, and they don’t think the PHP source was compromised, but they are working on a more thorough review and post-mortem.

Update 2013-10-26: Another update has been posted to PHP’s main website. They state that two servers were compromised, likely between 10/22/2013 and 10/24/2013. During this time, they served up javascript malware. The servers were responsible for hosting php.net, static.php.net, git.php.net, and bugs.php.net, but they do not think the php source or any of the downloads were compromised. They have reset their SSL certificate, as well as migrated to new servers, and are looking into the cause of the issue.
