
Comment Re:Not a narcisisst (Score 4, Interesting) 140

There is good engineering, and there is engineering a successful product. Edison was much better at understanding the latter; he also understood and played the patent system. In the end he was by far the better capitalist and businessman, hence he won financially, and winners write the history books.

Before writing Tesla off as the great engineer who never got successful, it is worth remembering that he did make a fortune (tens of millions in today's money) from his AC patents before he gave up on the royalties. He died a pauper because he blew that fortune self-funding research into ideas that were much less good: too confident in his own promised results, he sank all his money into ideas that simply didn't work.

Games

Infinite Browser Universe Manyland Hits 8 Million Placed Blocks 67

j_philipp (803945) writes: Manyland [here's the Twitter feed and a FAQ] is an HTML5/JavaScript-based MMO universe created by a community and two indie developers from Europe. Everything in the world can be freely drawn and placed: the cars, animals, plants, houses, bridges, and everyone's own bodies. Like Wikipedia, areas are editable by everyone by default (and removing a block leaves dust which can be used to undo the removal). Since it opened a year ago, over 100,000 different creations have been made, and now over 8 million blocks have been placed. Some features are for logged-in users only, but the whole thing is free for everyone to explore, and it has sucked away quite a few minutes for me.
Graphics

NVIDIA Presents Plans To Support Mir and Wayland On Linux 80

An anonymous reader writes: AMD recently presented plans to unify their open-source and Catalyst Linux drivers at the open-source XDC2014 conference in France. NVIDIA's counterpart presentation focused on supporting Mir and Wayland on Linux. These next-generation display stacks are competing to succeed the X.Org Server. NVIDIA is partially refactoring their Linux graphics driver to support EGL outside of X11, proposing new EGL extensions for better driver interoperability with Wayland/Mir, and adding support for the KMS APIs to their driver. NVIDIA's binary driver will support the KMS APIs/ioctls but will use its own implementation of kernel mode-setting. The EGL improvements are said to land in their closed-source driver this autumn, while the other changes probably won't be seen until next year.

Comment Re:Can someone explain... (Score 1) 69

Yeah, and who exactly is this afflicted user? Right: normally apache or some other unprivileged user who has relatively little power, though granted you don't even want unprivileged users logging in from the Internet.

For some, the ability to run their own code on a server with a high-bandwidth outward connection to the internet is all the power they need or want.
If the server is an authorised mail source for a domain (e.g. via SPF), so much the better.
If the user has access to some writable disk space that can be used to host interesting files, there are uses for that.
If the user can read the web site source or config files, that may have value in itself or may lead to further penetration - but I'm sure you've never seen a DB user/password in the source of a production website, ever?

Comment Re:Perjury (Score 5, Insightful) 191

C. Like it or not, the bitcoins represent evidence. Seizing evidence is par for the course in any criminal case.

Is selling the evidence off before the trial (which it was seized to be used in) also par for the course?
What happens to seized evidence after a not-guilty verdict - or do certain people already know that is not a possibility?

Comment Re:Paging Arthur C. Clarke... (Score 1) 534

Nothing was stopping the angels either, per Genesis 6:4, "when the sons of God came in unto the daughters of men" and begat the Nephilim (or were the Nephilim, depending on your translation). Enoch appears to make clear they are angels, but Enoch is apocryphal, so you may or may not believe it enlightens the story in Genesis. It matters not, though, because the story _is_ in Genesis, which is canon, and whether the "sons of God" (or Enoch's Watchers) are angels or aliens, they are clearly not men but are capable of interbreeding with the "daughters of men" and producing a race of giants (the Nephilim).

As regards opening up a whole new mess: all of this is post-Eden and pre-flood. Man can always complain to God: "you said I had dominion over everything, so what are these more powerful beings doing taking all the good-looking girls?", and God will say "yeah, and? Can I remind you that was back in Eden, when I also told you not to eat the ****ing apples?".

Comment Re:highly damaging to linux on the server (Score 1) 329

At least with Linux, if a security hole is found (and made public, or released to experts in the security community, or to the relevant developers, or whatever), the number of people who are able to investigate and fix the hole (and make official or unofficial fixes available) is in most cases significantly larger than the number of people who would be able to deal with issues in Microsoft code. And the Linux guys can have patches out much faster (and they can get into distros fairly fast too).

There is a flip side to that, which is that too many cooks with no common management may royally spoil the broth, which is what has happened.

When found, this was embargoed and should have been fixed under embargo, properly. Instead, someone pushed out half-baked patches that were proved to be still vulnerable within hours (if not minutes), and by then everything was public. So now we have everyone and his dog patching in panic mode, three, four or more(?) sets of patches now, with each one still proving vulnerable, because collectively we blew our chance to fix it (and test it) properly the first time.

The eventual fix will have to be to stop playing who-finds-the-parser-bugs-first and accept that the whole feature is a security hole and needs to be removed or radically changed - something that could and should have been predicted at the start. There are various patches flying around now that do just that, probably in different incompatible ways. Those changes will, inevitably, break some stuff - and of course we could have looked and tried to figure out the extent of that, and mitigated the effect and been ready with the announcement, if it were still under embargo, but we blew that chance.

I am all for saying that the open source development model is better - but this really is not the best example to use to illustrate it, in fact it's already heading into embarrassing farce territory.

Comment Re:Remote Exploits (Score 1) 329

I can't imagine the stupidity of these security folks or programmers. bash is not remotely, remotely exploitable. Hint: there was a program that listened to a network connection, and then it decided to try to pawn off security and trustworthiness to another program. That trust was misplaced, and that program is wrong.

If you accept bash commands on a network connection and feed them to bash, then you are an idiot, full stop. Now, if you validate the arguments, the format and the content of the command - for example, you allow two formats, true and false - then you can reasonably call bash to run those two commands. What you can't do is accept any arbitrary string and pass it to bash.

The problem is that the unix way is for tools to call other tools, and the shell is so pervasive it could be anywhere. How do you actually _know_ if you are calling bash?

* If you run /bin/sh, are you calling bash? Should you always check first? Can you even check?
* The same goes for calling system() etc.
* If you run any other executable, should you always check whether it is a script first? Can you even always check?

And this applies all the way down the pipeline, parts of which may be user/admin configurable.
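A quick sketch of why "just check" is harder than it sounds (assuming a typical Linux box; the commands shown are standard coreutils):

```shell
# Is /bin/sh really bash? On many distros it's a symlink, so resolve it:
readlink -f /bin/sh

# Even when /bin/sh turns out to be dash, libc's system() and popen()
# always invoke /bin/sh, and any script further down the pipeline may
# still start with #!/bin/bash -- the original caller cannot tell.
file "$(readlink -f /bin/sh)"
```

So a caller can inspect its own direct child, but it has no visibility into what interpreters later stages of the pipeline will spawn.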

Accept what you prove is trustworthy. Be willing to fix it, and claim responsibility when you are wrong.

No meta characters in environment variables - trivial. No meta characters in arguments - trivial. The argument should be a number, 0 to 255; check it _first_, then pass it.

Where in the pipeline? What if you don't _know_ what the data is supposed to be at this point (only some other tool further down the pipeline might)? How then do you prove it is ok? What does "no meta characters" mean in the context of an HTTP cookie, or an email subject line? It is meaningless - arbitrary text is valid data in either case.

I thought we learned this with all the sql commands from the network bugs. We seemed to do the right thing for them, why do they have it so wrong this time around?

What we learnt was to properly sanitize and/or quote text that was executed as SQL commands - at the point that we knew it was going into a SQL command. Your web server doesn't know what you are going to use each form field for: "Robert'); DROP TABLE Students" is perfectly valid as the subject of a forum post, but as a user name in a SQL command it needs quoting first. The web server can't quote it; the CGI script or whatever is down the line needs to quote it, if and when it gets used in SQL. Now, the web server also does not know, and should not need to know, what language a CGI script is in - but if it is bash, then bash will execute data as code before the script has a chance to verify it.
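The SQL-injection lesson described here can be sketched in a few lines of Python (sqlite3 from the standard library; the table name and payload are illustrative, echoing the "DROP TABLE Students" example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

evil = "Robert'); DROP TABLE students;--"

# Wrong: string interpolation splices attacker text into the SQL itself.
#   conn.execute("INSERT INTO students VALUES ('%s')" % evil)

# Right: a parameterized query -- the driver quotes at exactly the point
# where the data is known to be going into SQL, which is the argument here.
conn.execute("INSERT INTO students VALUES (?)", (evil,))

row = conn.execute("SELECT name FROM students").fetchone()
print(row[0])  # the hostile string is stored and retrieved as plain data
```

The quoting happens at the SQL boundary, not in the web server - which is the whole point: only the code that knows the data is about to become SQL can quote it correctly.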

This generalises to the whole pipeline - any caller may not know what the data it is passing down the pipeline in the environment (or otherwise) actually _means_, or what is valid/non-valid/malicious. Such determination can only be made by the callee that understands the data - but bash may execute it before it gets there.

Let's assume you are right and we can compare with SQL injection. With SQL injection we learned we had to properly quote data before feeding it into SQL, and fix all the places we do that. So, with bash injection, now that we know about it, doing the same would mean sanitizing / quoting all environment variables whenever we call bash. Except we don't know when that is (see above), so you really mean that every program that execs any other program, or feeds into any other program in a pipeline where that program might possibly be a shell script, needs to quote environment variables. That is just about every unix program ever written (and, er, where/when are you going to unquote that data?).

Or we fix bash, the program that is actually blindly executing data as code.
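For reference, the widely circulated probe for exactly that blind execution (safe to run; the payload only echoes):

```shell
# A function definition in an environment variable, with a trailing
# command that vulnerable bash versions executed at startup:
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
# Patched bash prints only "this is a test"; unpatched versions print
# "vulnerable" first -- environment data executed as code, before any
# script logic gets a chance to validate anything.
```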

Comment Re:Not so public disclosure (Score 2) 159

It is worth listening to sales; sometimes there is value in what they say, and you need them onside - it isn't fun (or good for job security) writing great software that they can't or won't sell. The suggestion is good as far as it goes, but I would say it isn't as simple as that.

Unless you are open source, your dev team's bug tracker should be their area and confidential to them. They may not all be commercially aware, customer-facing people; they may speak their mind in choice terms about some bug reports, and they shouldn't have to be constantly afraid of that turning up in front of the customer. It isn't just dirty laundry, it's the whole cleaning process and every dirty comment made whilst doing it.

Then, each customer has their own issues, which should be confidential to that customer (and you). Some issues may not be bugs but embarrassing customer mistakes, and customers _will_ (sooner or later) put information into an issue that is commercially confidential, personal information (data protection) or security related. Sometimes this is necessary - bugs may only be tripped by certain data, and it isn't the customer's job to determine the general case; it's your job when investigating. Support and dev need to understand that issues are private to a customer, but so does sales, so they can explain it positively to prospects: "can we look at your issue list?" "we can't show you customer-reported issues because we treat them as confidential to that customer - we believe customers should be able to report exactly what happens, which may include their data, rather than be required to create a general test case". Customer issue lists are not a concern of sales but rather of account management - the customer account manager may be in sales, but if so it is a different role they are performing.

Then you (maybe) have general issue list(s) that are visible to all customers - effectively a dynamic tracking view of what will be in the release notes (see below). You need to re-create issues here from the customer reports - not just blindly copy the customer report - and really these should be general cases, not just what the customer reports. E.g. a customer reports an issue with the Web UI display of an account named "P&O"; you establish it's an issue with the ampersand, and what you report to all customers is that there is an issue with ampersands - otherwise you risk reporting too narrow a bug and exposing customer-confidential information. Not every issue will make it to this list, only those of general concern; an issue that affects a configuration used by only two customers who have already been informed has no relevance here. I still see little value in this list being public beyond customers, but it should be expected that it might become public at some point (possibly via a customer), and information in it should be written accordingly - at the very least it is available to all customers. For these reasons sales may sometimes be involved in decisions as to what goes on this list.

For betas and dev builds etc., maintain separate lists and give access only to those customers who are running those versions, make them feel special, but also your SLAs etc. will be different for non-production versions and this helps keep that clear.

Finally, release notes - fixed issues and known outstanding issues. If you provide public downloadable demos, or you provide demo installs to sales prospects, then these need to come with release notes, so these documents go (at least) beyond your customers. For that reason you don't just dump (any of) the lists above into the release notes, each issue needs to be vetted for inclusion (see above for relevance, plus you want to inform not overwhelm) and rewritten to show your customer service in a positive light - this is effectively part of your shop window.

I would recommend trying to get sales (and maybe marketing) involved in reviewing (and editing) release notes, they are technical documents that need to be customer relevant and readable, sensitive to commercial needs and priorities, and hence need some sales/marketing input.

Yes, really, honestly, and my reasons are:

[stated reason]: This needs your (sales) input because we've written it as a technical document and we may not be aware of all the commercial priorities and sensitivities and may not have used the best customer-facing language, you guys are better than us at this
[real reason]: I want to make sure you sales guys have read the ****ing release notes and aren't out there demoing year-old versions and apologising for bugs we fixed months ago

If they don't want to engage with this, then a) you tried and b) they can't complain about it later...

Comment Re:Here's what I'm noticing on a web server (Score 1) 236

Check your distro. I believe RedHat at least has newer, allegedly more complete patches - although there are also posts saying you'll need a support contract for some older versions of RedHat.

My guess is that initially this was embargoed while the GNU/FSF guys "fixed" it, but by the time we found out the fix was no good, everything was already public, so now everyone downstream is trying to fix it themselves as fast as possible - RedHat etc. may do their own without waiting for upstream. It's now that you find out how good your support is - from whoever you paid for it, or from the guys who work for free if you paid no one.

If you want the DIY approach, personally I would look at the initial patch for where the offending code is and rip out the entire "feature"; I suspect very few scripts will break.

Alternatively, patch bash using rm.

Comment Re:Heartbleed comparison might be a bit overblown (Score 2) 236

I think the "bigger than Heartbleed" comments might be a bit overblown. If you're running CGI and your web server is running as root, you probably have a serious problem. Otherwise, it's just a PITA when you have other things to work on. Can anyone tell me why the script kiddies are scanning port 23? I haven't seen an open telnet port in years. From all the reports I've read, about the only things vulnerable are web servers running CGI, which was pretty vulnerable before.

ssh is also vulnerable (often configured to pass TERM and maybe LC_*)
Git repos using locked-down ssh - vulnerable
dhclient - vulnerable
CUPS - vulnerable

Those are just the ones we know so far.
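All of these vectors share one shape: attacker-controlled text lands in an environment variable before bash starts. A minimal Python sketch of the CGI case (the HTTP_* naming follows RFC 3875; the header payload is purely illustrative):

```python
import os

# What a CGI server does with request headers: each header becomes an
# HTTP_* environment variable before the CGI program is executed.
headers = {"User-Agent": "() { :;}; echo owned"}  # attacker-controlled

env = dict(os.environ)
for name, value in headers.items():
    env["HTTP_" + name.upper().replace("-", "_")] = value

# To the server this value is just data; it is never inspected.
# A vulnerable bash started with this environment (directly, or via a
# #!/bin/bash CGI script) would run the trailing command on startup.
print(env["HTTP_USER_AGENT"])  # -> () { :;}; echo owned
```

The server is behaving correctly per the CGI spec; the flaw is entirely in bash parsing that value as a function definition and executing what follows it.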

It is different to Heartbleed, and it could indeed get bigger. Heartbleed required zero effort to get the data, but then needed some luck and some work to find keys or login info within it - manual work was required before you could break into a system.

Shellshock, on the other hand, is direct, trivial, remote code execution. Attacks can easily be fully automated - it is trivially wormable. The question now is not whether or when fully automated worm attacks will happen - they already are happening - it is how many servers they will get.

And that is all before we start looking at all the embedded Linux out there that may have bash, and often has CGI. It may take longer to find exploit vectors for those, but it will also take longer to patch them, and some may not be patchable. Add into the mix that small business / home routers often also run DHCP servers, and that dhclient is vulnerable, and a router compromise can open up any unpatched client machines inside the firewall.

No one knows how big this _will_ get, but it _could_ easily be bigger than Heartbleed.
