You’ve probably heard about the big DNS attack on Dyn’s servers today, and you probably know it was the main reason “half the internet” wasn’t working properly.
The attack targeted Dyn’s DNS servers and took down a shitload of websites (well, not really took them down, but I’ll explain further along), including Netflix, WSJ, Imgur, Reddit, Spotify and more. According to CNBC, citing Dyn, “tens of millions of IP addresses” were sending packets and causing mayhem for the U.S.-based company.
In simple terms, DNS is basically speed-dial for websites. You don’t memorize an address like 184.108.40.206, you memorize “google.com”. Think of the IP as the phone number and the domain name as the speed-dial button. If the speed dial doesn’t work, you have to use the full phone number – you could still reach Google by typing its IP address directly if “google.com” isn’t resolving, but you probably don’t have a post-it stuck to your PC saying “In case of emergency, dial Google’s IP”.
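In code terms, the “speed dial” is just a name-to-IP lookup. Here’s a minimal sketch in Python – the `resolver` parameter is my own stand-in so the fallback idea is visible, not anything from the article:

```python
import socket

def resolve(name, resolver=socket.gethostbyname):
    """Turn a memorable name (the speed-dial entry) into the IP you actually 'dial'.

    By default this asks the OS resolver, which in turn asks DNS - the very
    service that went down in the Dyn attack.
    """
    return resolver(name)

# If DNS is down, the fallback is the raw "phone number": pointing a browser
# straight at the IP address skips the lookup entirely.
```

When Dyn’s servers stopped answering, it was exactly this lookup step that failed – the sites themselves were up, but nobody could translate their names into numbers.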
DNS is old, clunky and messy. After having set up an old-school physical server with everything a server should have – from a DNS server down to OSSEC for live security alerts – I can honestly say that DNS is horrible. There are about 40 RFCs (standards, best practices, etc.) that you should read to properly set up DNS for a server, all hosted on a website that looks like something Stallman would have seen on his PC while browsing back in the ’90s. Oh look, here you can see an RFC from August 1990 and one from December 2015. Same format, albeit a different colour scheme.
I am being a bit pedantic, I admit, but you’d expect 26 years to be enough for a widely used technology to evolve further than bolting on IPv6 and DNSSEC. Open a private window and query Google for “dns” – I challenge you to find a link to anything official on the first page. You won’t, and neither would someone who just got a job as a sysadmin at Dyn. And yes, they should be properly trained, but stuff like this still happens – just like some doctors screw up operations.
How about stopping for a bit and challenging the IESG (Internet Engineering Steering Group) and the IETF (Internet Engineering Task Force) to actually assess the situation at hand, decide on actual improvements, create proper documentation and generally build a professional environment around the technology, so that you don’t have to open 40 tabs and read documentation you may or may not need? Making the information more accessible and readable means more people will actually go through it, and that means more security.
There have generally been two trends in the internet world – centralization and decentralization. The latter is preferred by security enthusiasts, the former by snooping eyes and big companies, not necessarily in that order. It’s amazing, really, that DNS is still a centralized service, which makes it vulnerable to attacks like this – despite the standards requiring at least 2 DNS servers in different IP classes, “preferably in different buildings, with different ISPs”, they’re still vulnerable. A decentralized system (look at the blockchain, for example) is virtually unhackable: you just keep your own IP–domain list on your PC and nothing can happen to it, although there is a sizeable storage issue now that the number of registered domains has skyrocketed. (This is a gross oversimplification, but the idea is the same.)
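The decentralized idea above, sketched with the same gross oversimplification: every machine keeps its own copy of the name-to-IP table (think /etc/hosts at global scale), so there’s no central server to flood. The table entry and the size figures here are my own illustrative guesses, not real data:

```python
# Each node holds its own replicated name table - no central point to DDoS.
LOCAL_TABLE = {
    "example.com": "93.184.216.34",   # hypothetical entry
}

def resolve_locally(name):
    ip = LOCAL_TABLE.get(name)
    if ip is None:
        # With no central authority, a miss means your copy is stale:
        # you'd sync a fresh table from peers, blockchain-style.
        raise LookupError(f"{name} not in local table")
    return ip

# The "sizeable size issue": hundreds of millions of registered domains,
# times a few dozen bytes per entry, is gigabytes on every single machine.
entries = 330_000_000        # assumed ballpark, for illustration only
bytes_per_entry = 40         # name + IP + overhead, rough guess
table_size_gb = entries * bytes_per_entry / 1e9   # roughly 13 GB per machine
```

That per-machine storage cost (plus keeping every copy in sync) is exactly why this stays a thought experiment rather than a drop-in DNS replacement.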
Regarding the attack – people will rush to say Russia, because some countries just love cold wars and trips to space – but in reality we just don’t know who did it. A few weeks ago the first major IoT DDoS attack took place, in which some 50,000 IP cameras attacked Brian Krebs’ website and took it down. The cams were infected with Mirai, a (now) open-source botnet that managed to make its way around the globe.
The people at Incapsula poked around in the source code and found something pretty silly: whoever created Mirai hardcoded a list of IPs that ought not to be disturbed (IPs found on a wiki page), belonging to the US DoD, General Electric, HP, IANA and the US Postal Service. This smells like something a skiddie would do, and probably is.
So, seeing how tools like Mirai are freely available on GitHub and other channels, why rush to blame Russia and not some bored kid in Argentina? I mean, Russia could’ve done it, but don’t say so without actual proof.
Dirty COW – that’s the nickname given to CVE-2016-5195, a privilege escalation vulnerability in the Linux kernel. Linux is found on 65% of the top one million web servers in the world right now, so that’s about 650,000 web servers vulnerable to this bug. If half of those are VPSs with 5 users per server, we come up with 325,000 × 5 = 1,625,000 possible privilege escalation opportunities. The problem is that despite Linux being called the most secure OS – and open source at the same time – bugs like these exist, and we don’t find out until they’re being used to exploit our “good guy” servers. If a kernel vulnerability that allows root access could go unnoticed for 9 good years, what other “smaller” vulnerabilities exist in the kernel?
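The back-of-the-envelope math above, spelled out – the 65% share and 5 users per VPS are the article’s assumptions, not measured data:

```python
top_sites = 1_000_000
linux_share = 0.65                              # share quoted in the text
linux_servers = int(top_sites * linux_share)    # 650,000 vulnerable servers

vps_share = 0.5          # assume half are multi-user VPSs
users_per_vps = 5        # assumed users per VPS
escalation_chances = int(linux_servers * vps_share) * users_per_vps
# 325,000 * 5 = 1,625,000 possible privilege escalation opportunities
```

Every one of those users is someone who already has a shell and, with this bug, a straight path to root.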
I mean, Linux is free and anyone can contribute, but there are companies like Red Hat that take the Linux kernel and build products requiring expensive (I think) licenses – so you do expect a lot of security when *actually paying* for what’s basically a free product that already has a good security record, on top of the license and goodies Red Hat bundles with its OS.
We should keep in mind that Linux development is way different from that of, say, Windows or Apple’s OS X (or whatever they call it now). Linux is basically just a kernel, so developers work on tiny bugs (or big ones), small optimizations, new hardware compatibility and generally stuff you don’t really see on your screen. Either way, you – the end user – don’t just install the Linux kernel; you get a distro like Fedora, Debian, Ubuntu, or Arch if you’re into BDSM and those things. There are a lot of distros, and I mean A LOT of distros (but personally, Debian is king, and Elementary OS is nice). A distro bundles a Linux kernel and a GUI, among other things.
So, with another gross simplification we have:
- A dev team working on the Linux Kernel
- Another dev team working on the general distro (e.g. Debian)
- Yet another dev team working on the GUI (e.g. KDE)
Keep in mind that these are all separate projects, with little or no connection between them. Also keep in mind that there are 20+ actually used Linux distros, and 5+ actually used GUIs. We run into the same problem as Android – fragmentation.
There is a huge amount of work wasted on reinventing the wheel over and over again: dev teams recreating the same menu bars to make “a new GUI” or to improve an existing one, hundreds of people creating new “distros” by repackaging something that already exists with a different GUI, and so on.
As a conclusion, after veering off topic a bit: can we stop just building cities on thin ice and actually reinforce said thin ice? Let’s overhaul DNS, or rethink it, or stop using it altogether. How about this – any company maintaining a distro should have (ahem, as a good practice, or something that makes them look better in the eyes of the consumer) at least 5 researchers (they should be called a LINUX KERNEL TASK FORCE) dedicated to Linux kernel development. Slightly communistic, but they’d be paid for this work, so that solves the communism problem.
**Regarding the vulnerability** – you’ll be delighted to hear that it has been patched, and a messenger pigeon will make its way downstream to your local Linux distro maintainer, then fly even lower to your server with the patched kernel update – so please expect him with some water and an *apt-get update && apt-get upgrade* prepared.