Most quality web hosting provides customers with shell access to the web server, and in cases where it doesn't, something like PHP is usually installed that allows for arbitrary code execution anyway.
On a web server that hosts a few thousand sites, you can use the Bing IP Search to find a list of all the domains. Usually there will be some low-hanging fruit that's easy enough to pluck. Or, if you can't get shell access through a front-facing attack, you can always just sign up for an account with the hosting company yourself.
So once you have shell, it's a matter of being a few steps ahead of the web host's kernel patching cycle. Most shared web hosting services don't use expensive services like Ksplice and don't want to reboot their systems too often due to downtime concerns, so usually it's possible to pwn the kernel and get root with some script-kiddie-friendly exploit off exploit-db. And if not, no doubt some hacker collectives have repositories of properly weaponized, unpatched 0-day exploits for most kernels. Even if the host does keep its kernel up to date and strips out unused modules and the like, maybe it's failed to keep some [custom] userland suid executables up to date. Or perhaps the suid executables are fine, but the dynamic linker suffers from a flaw like the one Tavis found in 2010. The list goes on and on -- "local privilege escalation" is a fun and well-known art that hackers have been at for years.
So the rest of the story should be pretty obvious... you get root and defeat SELinux or whatever protections they probably don't even have running, then you have access to their NFS shares of mounted websites, and you run some idiotic defacing script while brute-forcing their
The moral of the story is -- if you let strangers execute code on your box, be it via a proper shell or just via PHP's system() or passthru() or whatever, then sooner or later, if you're not at the very tip top of your game, you're going to get pwn'd.
I'm a good friend of John, the blog post author, and have been working with him throughout this process of trying to unravel Hamstersoft's deceit. I want to make a few things pretty clear:
Yes, they posted a zip of code at a hard-to-find link. But they did something sneaky: they included the very short and trivial C# wrapper around Calibre, but they only included a compiled (well,
Cheap trick.
The other thing to take note of in John's post is that the search engines and Facebook have hardly complied -- there are still search results and Facebook pages for this company. Now, you can debate and troll and bikeshed and argue the validity and ethics of the DMCA all you want, but the fact of the matter is that when the big companies want to use it against the small, it seems to work, while when some OSS devs want to take the case up with giant companies, the response is exceedingly lackluster. (Likely, this being on
The final point to consider is what this all means for the GPL and OSS. Hamstersoft is Russian, so good luck trying a lawsuit or anything. But at the very least, shouldn't the OSS community have an army of lawyers willing to work pro bono, or financed by various foundations, for exactly this kind of thing? John mentioned he tried contacting one such organization and was unsuccessful. He told me that at another point he got in contact with a lawyer from another place who didn't offer to do any work for him but vaguely suggested he send these notices to Google, Facebook, etc. That's pretty lackluster. I don't want to complain too loudly; instead I just want to suggest that this issue call our attention to the bigger one -- what institutions do we have in place to effectively protect OSS as small devs? Do such institutions work? In this case, thus far, they don't seem to be.
You're mistaken. It's actually implemented by running the client-side JavaScript page in a headless WebKit browser. Simple code.
Though, Google recommends using HtmlUnit.
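If you want a rough idea of what that looks like, here's a minimal HtmlUnit sketch -- the URL and the timeout are just placeholders of mine, and the exact method names vary a bit between HtmlUnit versions:

```java
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class Snapshot {
    public static void main(String[] args) throws Exception {
        // Load the page in a headless browser, run its JavaScript,
        // and dump the resulting DOM as static HTML for the crawler.
        WebClient client = new WebClient();
        client.setThrowExceptionOnScriptError(false); // tolerate sloppy page JS
        HtmlPage page = client.getPage("http://example.com/#!/someajaxstate");
        client.waitForBackgroundJavaScript(5000); // give AJAX calls time to finish
        System.out.println(page.asXml());
        client.closeAllWindows();
    }
}
```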
Actually that's not true. With the AJAX Crawl Spec, search engines can read AJAX pages.
http://code.google.com/web/ajaxcrawling/docs/getting-started.html
Basically, it rewrites domain.com/#!/someajaxstate to domain.com?_escaped_fragment_=someajaxstate, and then it's the responsibility of the server to render that state statically. PhotoFloat uses HtmlUnit to do server-side JavaScript execution. I wrote about it here: http://blog.zx2c4.com/589
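To make the server's half of that bargain concrete, here's a rough sketch -- the servlet and the renderSnapshot() helper are hypothetical illustrations of mine, not PhotoFloat's actual code:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AjaxCrawlServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Crawlers turn /#!/someajaxstate into ?_escaped_fragment_=someajaxstate.
        String fragment = req.getParameter("_escaped_fragment_");
        if (fragment != null) {
            // Serve a static snapshot of what the client-side JavaScript
            // would have rendered for /#!/<fragment> -- e.g. via HtmlUnit.
            resp.setContentType("text/html");
            resp.getWriter().write(renderSnapshot(fragment));
        } else {
            // Normal browsers get the regular client-side AJAX app.
            resp.sendRedirect("/index.html");
        }
    }

    // Hypothetical helper: run the page in a headless browser, return the DOM.
    private String renderSnapshot(String fragment) {
        return "<html><!-- rendered content for #!/" + fragment + " --></html>";
    }
}
```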
The big question for me is this --
The download link only allows you to get the encoded FLV file. Does this mean they failed to store the originals? And if so, does that mean YouTube would be serving up the old-fashioned, low-quality H.263 FLV encodes? If that's the case, we'd be much better off _not_ using the auto-move service, as YouTube encodes at much higher quality than Google Video did.
Or, did they just not want us to be sucking their bandwidth by allowing us to download the original footage, but they'll happily transfer it in-house over to YouTube?
Anyone have any pointers?
Ingo,
I believe most desktop users run into this problem when they complain about I/O schedulers. Is there any immediate plan to address it?
Thanks,
Jason
"If I do not want others to quote me, I do not speak." -- Phil Wayne