
Comment Re:Looked slick, but so unstable (Score 5, Insightful) 282

Yeah, but that instability was not entirely Win95's fault.

Back then computers had almost no resources. NT had a "proper", academically correct OS design with a microkernel architecture (until NT4). It paid for it dearly: resource consumption was nearly double that of Chicago. Additionally, app and hardware compatibility was crap. Many, many apps, devices and especially games would not run on Windows NT. Microsoft spent the next 6-7 years trying to make NT acceptable to the consumer market and only achieved it starting with Windows XP.

So Win95 was hobbled by the need for DOS and Win3.1 compatibility, but that is why it was such a huge commercial success.

Making things worse, tools for writing reliable software were crap back then. Most software was written in C or C++, often without any kind of STL. Static analysis was piss-poor to non-existent. If you wanted garbage collection, Visual Basic was all you had (and even that actually used reference counting). Unit testing existed as a concept but was barely known: it was extremely common for programs to have no unit tests at all, and testing frameworks like JUnit didn't exist yet. Drivers were routinely written by hardware engineers who had only a basic grasp of software engineering, so they were frequently very buggy. The hardware itself was often quite unreliable too: computers didn't have the same kinds of reliability technologies they have today.
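For contrast, the sort of thing that JUnit later made routine looked roughly like this. This is a hedged sketch in plain Java with no framework; the function under test and its checks are hypothetical:

```java
// A sketch of the kind of unit test that JUnit later made table stakes.
// Plain assertions so it runs without any framework; names are hypothetical.
public class DateParserTest {
    // Toy function under test: parse "DD/MM" into day-of-month.
    static int dayOf(String date) {
        return Integer.parseInt(date.substring(0, date.indexOf('/')));
    }

    public static void main(String[] args) {
        // Each check documents one expected behaviour.
        if (dayOf("24/08") != 24) throw new AssertionError("two-digit day failed");
        if (dayOf("1/1") != 1) throw new AssertionError("single-digit day failed");
        System.out.println("all tests passed");
    }
}
```

In the mid-90s almost nobody wrote even this much; the idea that every module ships with a battery of such checks came later.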

Most importantly, nobody had the internet, so apps couldn't report crash dumps back to their developers. Most developers never heard about their apps' crashes and had no way to find them except exhaustive, human-based testing. That's basically what distinguished stable software from unstable software: how much money you paid professional software testers.

Everyone who used computers back then remembers the "save every few minutes" advice being drilled into people's heads. And it was needed, but that wasn't entirely Microsoft's fault. It was just that computing sucked back then, even more than it does today :)

Comment I remember ..... (Score 5, Insightful) 282

.... the Briefcase!

I just can't remember what it was for.

Win95 was such a huge upgrade. We forget now, but it packed an astonishing amount of stuff into just 4MB of RAM (8MB recommended). If someone produced it today in some kind of hackathon it'd be praised as a wonder of tightly written code. They even optimised it by making sure the dots in the clock didn't blink, because the animation would have increased the memory usage of the OS!

It's surprising how little Windows has changed over the years, in some ways. Not because MS didn't want to change it but because the Win95 UI design was basically very effective and people still like it, even today.

Comment The chrony web page has some nice comparisons (Score 3, Informative) 157

The Chrony comparison page compares ntpd, Chrony and OpenNTPd. Another, not-yet-finished, alternative is ntimed (which currently seems to be around 6000 LoC). On some Linux distributions, if you don't care about accuracy or about weeding out false time, you can always use a simple SNTP client such as systemd-timesyncd.
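For anyone curious what a Chrony setup looks like, a minimal config is only a few lines. This is a sketch using standard chrony.conf directives; the pool address and file path are illustrative:

```
# Minimal chrony.conf sketch (directives from chrony's documentation).
pool pool.ntp.org iburst          # use a pool of servers, sync quickly at start
driftfile /var/lib/chrony/drift   # remember the clock's measured drift rate
makestep 1.0 3                    # step the clock if off by >1s in first 3 updates
rtcsync                           # keep the hardware clock in sync too
```

The small configuration surface is part of why people recommend it over ntpd for ordinary client machines.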

Comment Linux Foundation trying to work out who to give to (Score 1) 157

The Linux Foundation has already given funding to a few open source projects it considers "core" (including the original NTP project) and has been trying to assess which other core projects are most at risk. Judging from the members page, at least two of the companies you mentioned (Google, Facebook) are part of the Linux Foundation, so the giving back has at least started...

Comment Re:Exploit? (Score 1) 42

That's good to hear. These exploits often don't seem to be as bad as initially suggested. No big surprise there, I guess.

This Google Admin app bug doesn't seem to be a general sandbox bypass as the summary implies. It's not even a bug in Android. An app by Google that lets people administer their custom Apps domains will open a URL in an embedded WebView if asked, and that embedded WebView can then perhaps be used to exfiltrate files from the Admin app. But are there any sensitive files there to be stolen? The advisory doesn't say, so I expect the answer is "no".

Well, any OS that lets apps talk to each other can have this sort of issue - it's like blaming Windows for a bug in Firefox. Makes no sense. Probably good for getting attention though.

Comment Re:Then you don't quite get a number of things (Score 2) 457

3b. I've noticed the memory issue. I've also noticed a lot of Java programs seem to have a hard time going beyond 1GB of RAM. I'm sure there is a way to make them do that but... I've had to screw around with workarounds more than a few times to deal with that issue.

There are people happily running Java with 300-gigabyte heaps. Look at the Azul Zing JVM for examples of this; they're using it in ultra-low-latency financial trading apps. Just because you haven't seen this sort of thing personally doesn't mean it never happens.
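The 1GB ceiling you're describing is almost certainly just the JVM's default maximum heap, which is deliberately conservative. You raise it with standard HotSpot flags; the sizes and jar name here are illustrative:

```
# Illustrative invocation: -Xms/-Xmx are standard HotSpot options
# (initial and maximum heap size); the jar name is hypothetical.
java -Xms4g -Xmx32g -jar my-server.jar
```

No workarounds needed; the JVM simply won't grow the heap past -Xmx, and the default used to be small.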

As to your claim that it isn't slower if it has enough memory... That's not my experience. I'm sure I could get you testimonials and links to people talking about Java being slow. But I rather suspect you won't listen to it or will say it is invalid for some reason.

Performance is complicated. There are lots of cases where a Java program is just as fast as a C++ program, or even faster. Virtual method dispatch in a tight loop, optimised via polymorphic inline caches (PICs), is one example where the JVM stomped C++ for many years, with comparable devirtualisation optimisations only appearing in GCC quite recently. HotSpot is an excellent compiler and can do a lot of interesting things.
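A sketch of the pattern in question: a virtual call site that turns out to be monomorphic at runtime. HotSpot profiles the loop, observes only one receiver type, and can inline the call behind a cheap type guard; a classic ahead-of-time C++ compiler seeing only this translation unit could not. The class names here are hypothetical:

```java
// Hypothetical example of a monomorphic virtual call in a tight loop,
// the case HotSpot devirtualises and inlines after profiling.
interface Shape {
    double area();
}

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class DevirtDemo {
    // The JIT sees only Circle flowing through here at runtime,
    // so s.area() can be inlined behind a type check.
    static double sumAreas(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area();   // virtual call, devirtualised when monomorphic
        }
        return total;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Circle(2.0) };
        System.out.println(sumAreas(shapes));
    }
}
```

If a second Shape implementation starts flowing through the loop, HotSpot deoptimises and falls back to an inline cache, which is still cheap.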

Moreover, it's not as if every program faces a simple choice between Java and C. Many developers use languages like Ruby or Python. It turns out there's an advanced research JVM that can co-compile Ruby and the C source code of Ruby/MRI extensions together, with performance that's radically faster than the original implementation.

But mostly, people use Java because the performance is good enough and the benefits over the C/C++ ecosystem are big. For instance, you get reliable debuggers, stack traces that are never corrupted, no manual memory management, ultra-fast compiles, huge and standardised package repositories with dependency management, high-quality profiling tools, lots of libraries, etc.

There's less bullshit to deal with than with compiled programs. They just "work" more reliably.

I don't doubt your experience, but it has nothing to do with AOT vs JIT compilation. Applets that stop working on newer JVMs are probably relying on bugs in earlier versions. That can happen any time there's dynamic linking. Every time I upgrade Mac OS X, some apps I use stop working properly, even though they're all compiled. Apple just isn't very good at backwards compatibility.

Comment Re:You are talking about 2001-2004 technology! (Score 2) 457

It was not "pretty amazing". I have written J2ME apps. It was a disaster zone, mostly for policy not technical reasons.

Problem one: its conformance testing was crap and the licensing for the upstream implementation was expensive. So, guess what, phone OEMs made their own. And did it badly. EVERY J2ME phone was full of bugs, often incredibly basic and obvious ones, like camera APIs that leaked every image taken (take three photos in a row -> OutOfMemoryError), or drawing APIs that crashed the device if you tried to draw a bitmap at negative coordinates (correct behaviour is to clip).

This meant that in practice you had to test every version of the app on every device, because bugs were so common.

Problem two: it was tiny. Almost every API was optional, and Java has no good support for on-the-fly adaptation to missing APIs. So apps ended up needing a C-style macro preprocessor to customise the build for every combination of bugs and missing features. You think Android is fragmented? I rofl in the face of Android fragmentation, because I've seen J2ME's equivalent.

Problem three: the CLDC VM was unbelievably sluggish, even compared to the early Dalviks.

Problem four: many APIs were protected by a code signing requirement that was painful to meet and often very expensive for no good reason. Forget about writing free hobby apps.

Problem five: no app store. Every carrier ran its own, and if you wanted distribution ....... yup. They wanted money. Often, meetings and contracts too. Just forget it.

Comment Re:This is FUD (Score 1) 111

Doesn't it essentially let you find out whether you've had (since this boot?) up to 256 bits of entropy? You can ask it whether it has accumulated a given amount, as long as that amount is 256 bits or less, and you can force it to return failure if you ask for an amount it hasn't yet reached. It's not as generic as what you're asking for ("tell me how much entropy you've ever had"), but it does sound close, albeit capped at 256 bits.

Comment Re:It's the base assumption that its invalid (Score 1) 392

There are approximately 2080 working hours per year (52 weeks per year, 5 work days per week, 8 hours per work day).

That doesn't include any vacations or lunch breaks. A more normal figure is 1650 hours. Regardless, we're talking about the Home Secretary. She is one of the most senior ministers in the land and does many different things. She is not a full time warrant approver. There's just no way she can do all her other tasks AND this whilst having any time left over.

But even if she were, what kind of review can one person possibly engage in with less than an hour to examine each warrant? She pretty much has to believe whatever is written on it. It's hard to imagine that this is a robust process.
