Comment Re:too expensive (Score 4, Insightful) 308

This is a way to have a movie theater in their town without driving an hour. You need to factor that into your estimations.

If I lived an hour away from any other movie theater, I would pay $20/month to keep my local theater alive. Sometimes it's fun to see a movie on the big screen, with your friends.

If that experience isn't something you care about, there's Netflix.

Comment Cree and me (Score 5, Informative) 421

A year ago, I had no idea who "Cree" might be.

Then I bought one of these:

http://www.fenixlight.com/viewproduct.asp?id=151

It's the best pocket flashlight I have ever owned. Bright and useful on "low" power (32 Lumens) and very bright on high (105 Lumens). 500 minutes of light (over 8 hours) from a single AA cell on low, or 110 minutes on high. (I'm trusting the manufacturer's numbers here, but I can verify that it actually is bright and lasts a long time.) Anyway, that's a Cree LED, and it doesn't have the horrible bluish tint of older LEDs I have bought in the past.

More recently I bought an Ecosmart light bulb at Home Depot. "Ecosmart" is a Home Depot house brand, and uses Cree LED chips. For $10 I got a light bulb that claims to give equivalent light to a 40 Watt incandescent bulb, but seems brighter than that (I think because it's much more directional; it's in a downward-facing fixture so that's fine).

http://www.homedepot.com/h_d1/N-5yc1v/R-202188260/h_d2/ProductDisplay?catalogId=10053

And just two days ago I got an LED retrofit for a 6" can fixture. I bought the 2700K color temperature version, because I like that better than the "colder" lights (bluer, which actually have higher color temperatures). I walked into the store planning to just buy a bulb for my can light fixture, and now I'm very glad I bought the whole Ecosmart fixture instead. I found an LED light geek web site where the guy bought one of these just to do a teardown; he found 5 Cree LED chips inside it. Where I live, the power company is subsidizing these lights, so I only had to pay $20 for this one. It dissipates only 9.5 Watts, yet it's very bright. I love the design: it includes three spring fingers to hold it in place, but if you rotate it the fingers collapse and stop holding it. So two decades from now when the LED stops working, it will be easy to remove.

http://www.homedepot.com/h_d1/N-5yc1v/R-202240932/h_d2/ProductDisplay?catalogId=10053

So now I want to see Cree make some sort of flush-mount ceiling fixture. I have only found a few flush-mount LED fixtures; they are all super expensive and I can't find any in the 2700K color temperature. I did find one promising-looking cheap fixture, but it's only on eBay and it's an import from China... I have no way to be sure of the quality other than just buying one and trying it.

My current plan is just to install some fixtures that have air gaps for circulation, so I can use the Philips LED bulbs (omnidirectional, not directional like the Ecosmart ones). I'm going to install one of these tomorrow and see how we like it. In case the URL doesn't work right, this is a "Project Source 2-Pack White Ceiling Flush Mount" from lowes.com.

http://www.lowes.com/pd_394606-43501-87822-01_0__?productId=3745415

Based on my experience with these lights, we are just on the cusp of LED lighting becoming mainstream and common. I've been buying these because they are subsidized, but electronics always gets cheaper over time, and within a couple of years LED lights should be cheap enough without subsidy that everyone starts buying them. (Even without the subsidy, they make sense long-term versus incandescent bulbs. If you have incandescent lights, consider LED rather than compact fluorescent.)

P.S. I haven't bought these, but I wish the office where I work would buy them: Cree replacement lights for standard fluorescent fixtures. Some companies make LED tubes that are the exact size of a T8 fluorescent bulb, with matching pins; for $60 or $80 or so per tube you can replace fluorescents, but you must rewire the fixture to bypass the ballast, because these tubes want mains power directly. Cree instead made a complete fixture, with the elegant design touch of a heat sink on the underside (a black strip down the center) so that it sits in the circulating room air. Cree claims the payback period is less than a year; when I do the math (roughly sketched below, after the links) it comes out quite a bit longer, so I'm not sure what their assumptions are. Still, according to Cree these fixtures use 75% less power than T12 fluorescents, look better, and last at least a couple of decades (no bulb replacements needed), so I do believe they will pay for themselves within a reasonable time. The YouTube video is why I want the company for which I work to install these...

http://crseries.creeledlighting.com/

http://www.youtube.com/watch?v=WOpHmjDyMPE
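Here is a rough back-of-the-envelope version of that payback math, as a small C program. Every number in it (fixture wattages, hours of use, electricity rate, installed cost) is my own guess, not anything Cree publishes, so treat it as a sketch rather than a real analysis:

    /* Rough payback sketch for an LED replacement of a T12 fluorescent fixture.
       All the numbers below are my own assumptions, not Cree's. */
    #include <stdio.h>

    int main(void)
    {
        double old_watts      = 80.0;   /* assumed: two-lamp T12 fixture, including ballast */
        double new_watts      = 20.0;   /* assumed: roughly 75% less, per Cree's claim */
        double hours_per_year = 3000.0; /* assumed: about 12 hours/day, 250 workdays */
        double cents_per_kwh  = 12.0;   /* assumed: typical commercial electricity rate */
        double fixture_cost   = 100.0;  /* assumed: installed cost per fixture */

        double kwh_saved     = (old_watts - new_watts) * hours_per_year / 1000.0;
        double dollars_saved = kwh_saved * cents_per_kwh / 100.0;

        printf("Energy saved:   %.0f kWh/year\n", kwh_saved);
        printf("Money saved:    $%.2f/year\n", dollars_saved);
        printf("Payback period: %.1f years\n", fixture_cost / dollars_saved);
        return 0;
    }

With those guesses the payback comes out to several years, not under one, which is why I suspect Cree is assuming much longer operating hours or pricier electricity than I am.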

Comment A tie means Intel loses (Score 4, Insightful) 163

I have said it before: with ARM, you can choose from multiple, competing chip vendors, or you can license the ARM technology yourself and make your own chips if you are big enough; with x86, you would be chaining yourself to Intel and hoping they treat you well. So, if low-power x86 is neck and neck with ARM, that's not good enough.

Intel is used to high margins on CPUs, much higher than ARM chip makers collect. Intel won't want to give up on collecting those high margins. If Intel can get the market hooked on their chips, they will then ratchet up the margins just as high as they think they can.

The companies making mobile products know this, and will not lightly tie themselves to Intel. So long as ARM is viable, Intel is fighting an uphill battle.

Comment Re:1.25v DDR3, but CPU efficiency... (Score 4, Interesting) 128

The i7 3770K has a TDP of 95W.

I know that, at least in the past, Intel issued TDP numbers that represented "typical" heat, while AMD issued TDP numbers that represented worst-case heat (which is what TDP ought to mean, IMHO). I have read here on Slashdot that more recently AMD has started playing the same games.

But according to NordicHardware, in this case Intel is under-promising and over-delivering, and the chips really do dissipate only 77W despite being rated for 95W. (But how did they measure that? Is this a "typical" 77W? I guess it's not that hard to run a benchmark test that should hammer the chip and get a worst-case number that way.)

Curiously the AMD processors tend to stack up better on the Linux benchmark suites.

This is probably because the Linux benchmarks were compiled with GCC or Clang rather than the Intel compiler. The Intel compiler deliberately generates code that runs poorly on non-Intel processors: the generated code checks the CPU vendor ID at runtime and takes one of two major branches, a fast path that Intel chips get to run and a slow path that everyone else gets.

http://www.agner.org/optimize/blog/read.php?i=49
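To make it concrete, here is a minimal sketch in C of the vendor-check dispatch pattern Agner Fog describes. This is my own illustration of the idea, not Intel's actual runtime code; a fair dispatcher would test for CPU features (SSE, AVX) instead of the vendor string:

    /* Sketch of vendor-based dispatch: picks a code path by asking "is this an
       Intel chip?" rather than "does this chip support the needed instructions?" */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    static int cpu_is_intel(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 0;
        memcpy(vendor + 0, &ebx, 4);  /* vendor string is stored in EBX, EDX, ECX */
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        return strcmp(vendor, "GenuineIntel") == 0;
    }

    int main(void)
    {
        if (cpu_is_intel())
            puts("fast path: vectorized SSE/AVX code");
        else
            puts("slow path: generic x86 code");  /* even if the CPU supports SSE/AVX */
        return 0;
    }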

The irony is that Intel, by investing heavily in fab technology, is about two process generations ahead of everyone else, so they can make faster and/or lower-power parts than anyone. This means they could compete fairly and still win.

But because Intel does evil things like making their compiler sabotage their competition, I refuse to buy Intel. They have lost my business. They don't care of course, because there aren't many like me who are paying attention and care enough to change their buying habits.

If you want the fastest possible desktop computer, pay the big bucks for a top-of-the-line i7 system. But if you merely want a very fast desktop computer that can play all the games, an AMD will do quite well, and will cost a bit less. So giving up Intel isn't a hard thing to do, really.

Comment Re:It won't be a smooth distribution of versions (Score 1) 298

Wow, thank you for the fact check. It just goes to show that subjective memories aren't the best guides. I was sure that the gap between 2.3 and 4.0 was much longer than ten months! This is probably because I pay more attention to available devices than to software release dates.

The changes between Gingerbread and ICS were big, and the rollout took quite a while for pretty much all brands of phone. Once a phone company successfully adopted ICS, it must not have been nearly as hard to upgrade to Jelly Bean, because updates came faster.

Comment It won't be a smooth distribution of versions (Score 2) 298

Android 2.3 "Gingerbread" was the newest phone OS for a long time, because it was followed by Android 3.0 "Honeycomb" which was only for tablets. A whole bunch of phones shipped with Gingerbread.

After a long time Google released Android 4.0 "Ice Cream Sandwich" and then, after a much shorter time, Android 4.1 "Jelly Bean". ICS was a big enough change that the phone companies were a bit slow to roll it out, with many phones shipping with Gingerbread and a promise that ICS would be provided as an update. Early adopters made an effort to get new phones, but most people kept on using their existing phones (which after all still worked).

Thus I would expect Gingerbread to still be a large chunk of the Android phones in current use, with ICS or Jelly Bean as an ever-growing segment. I've seen articles claiming that the large amount of Gingerbread still in use is a "problem" or a "failure" but I don't see it that way.

At this point, new phones no longer come with Gingerbread so over time the old phones will be replaced with ICS or Jelly Bean.

I don't think we can learn anything useful about the merits or weaknesses of Android 2.x versus Android 4.x by looking at market share. It's almost purely related to what was available and when. Early adopters always want the newest, other users mostly just buy a new phone when they need one and take whatever system the phone is running.

But I will say that there is no way the Galaxy SIII would be as popular as it is if it were saddled with Gingerbread.

Comment Re:Even if this was true... (Score 1) 1009

why would any "enthusiast" go for an ARM CPU with about one tenth of the power a current Intel CPU has? I call this story b/s.

You could try reading the fine article. If you did so you would learn that he is talking about overclocking enthusiasts, who are now having fun overclocking ARM chips. He linked an article about Android running at 3.0 GHz (on an OMAP chip rated for about 1 GHz).

Why will overclocking desktop chips end? From TFA:

there is a very good chance that Broadwell's successor, Sky Lake, will bring back a socketed CPU. Unfortunately it will only be for a generation, possibly two, nothing permanent. By then, the last remaining overclockers and experimenters on the PC front will be gone, and for good technical reasons. Increasing integration will make this minor backpedalling step a rather moot point, there won't be anything left to tweak, and any headroom will have been screened out at the fab prior to fuses being blown. Worse yet, margin requirements will effectively make it not worth extreme cost.

Overclocking first got everyone's attention when a 300 MHz Intel CPU was easily overclockable to 450 MHz. That was a really significant overclock; a cheap part, stable when overclocked, and a huge performance boost. It is harder to get excited about overclocking some recent CPU chip from 3.2 GHz to 3.3 GHz. But oh hey, some guys have overclocked an ARM chip from 1.5 GHz to 3.0 GHz... does the story still sound like "b/s" to you?

Also, the quoted section says that "any headroom will have been screened out at the fab", implying that there won't be any overclocking potential in the new chips. I don't quite understand how that will work, however. The reason overclocking works is that CPUs rated for the highest clock rates cost more, so fewer of them are sold; the cheaper, lower-rated CPUs are basically the same silicon, just not guaranteed at the higher speed. In other words there is a supply of chips capable of higher speeds being sold as lower-speed parts. How does improved screening change this situation? Unless Intel has some way of making the chip yields come out to exactly the number of fast chips they need, it seems like there will always be a chance to find a chip that could have been in a more expensive bin. It hardly seems likely that Intel will just shred chips that overperform their bins!

Now, Intel could be implementing actual anti-overclocking measures, but that's not how I interpreted the quoted text, so as I said I don't quite understand.

Comment Re:Really? Woz? (Score 4, Informative) 333

MS typically makes shitty clones of popular products.

That oversimplifies the situation to the point of not being a useful statement.

Windows 3.11 was an ugly clone and copy of the Mac.

No, both the Mac and Windows were attempts to make something similar to the Xerox GUI system (that both Jobs and Gates had seen). And in those wild and woolly early days there was a lot of cross-pollination between the Windows and Mac worlds.

At the time, the Mac was hands-down more beautiful, more elegant, and more polished. Windows 3.x was partially burdened by a bunch of GUI conventions invented by IBM called "CUA" (Common User Access); this is why the shortcut for "save file" was not Ctrl+S, but rather Shift+F12 or something like that.

I'm sure there is stuff in Windows that was on the Mac first, but it is hardly accurate to say that Windows 3.x was a "clone" of the Mac. Heck, I think it was 1987 before Mac OS could even do color, and Windows was full color all along. Windows always had menus on each window, Mac always had a top-of-screen menu bar. All sorts of differences.

Netbios was their poor attempt of copying VMS networking technologies.

I don't know anything about this so I will take your word for it.

Word was a copy of Wordperfect.

Good grief, no! Where are you getting this? Word was originally released with the so-called "multitool" interface, a weird sort of menu system. WordPerfect was designed to be used mostly via the function keys (and everyone had little function key overlays to remind them what Shift+Ctrl+Alt+F9 and all the rest did). WordPerfect used embedded codes and had a "reveal codes" command; Word used properties attached to characters, paragraphs, sections, or styles.

Here's a primary reference: "My mission: write the world's first wordprocessor with a spreadsheet user-interface. It took five years to repair the damage."

Word for Windows was available before there ever was a WordPerfect for Windows, so I don't think your claim makes sense in the GUI world either.

Excel was bought and was a cheap clone of Lotus.

Just as Word evolved from the "multitool" version of Word, Excel evolved from Multiplan, Microsoft's first spreadsheet. Per Wikipedia, Multiplan was first sold in 1982, and Lotus 1-2-3 came out in 1983. Excel was not bought; you are mistaken on that point.

Multiplan and Excel were nothing like Lotus 1-2-3; Borland tried making a menu-compatible spreadsheet that actually was like 1-2-3, and got sued.

IE a buggy clone of Netscape etc.

Microsoft licensed a browser called Spyglass Mosaic and customized it into IE 1.0. Spyglass Mosaic was sort of based on NCSA Mosaic, the first popular web browser. In no sense can either Mosaic be considered a clone of Netscape, given that Netscape itself was created by the very people who had written NCSA Mosaic!

Probably as IE evolved it copied things from other browsers. That happens. IE also pioneered a lot of things, many of which we don't really want (remember ActiveX?).

Comment Re:What is Linux Mint? (Score 1) 295

Let me rephrase and see if that helps you understand my point.

MATE is a fork of GNOME 2.x. It should work just as well as GNOME 2.x ever did (barring bugs). But people have moved on from writing to the GNOME 2.x API; the new apps are written to the GNOME 3.x API.

Well okay then, just install the GNOME 3.x libraries alongside MATE and run the new apps. That will work, at least for now. But if the apps have features that integrate with the desktop environment, those features won't work in a GNOME 2.x-flavored environment (i.e. MATE).

Well okay then, MATE isn't really a frozen fork, it's being developed; how about porting it to the GNOME 3.x world? Well, that would be a huge amount of work. And the longer the GNOME guys keep adding features to GNOME 3.x, the wider the gap becomes and the more work it takes to bridge it.

But wait, the GNOME 3.x guys made it possible to write plugins and such and adapt the GNOME 3.x desktop... if you did that, you could make something that works just like GNOME 2.x, but is native GNOME 3.x, so it would naturally inherit the changes to GNOME as it evolves. That sounds like a nice future-proof solution. Okay, that's Cinnamon.

So if you want a smoothly polished desktop right now, you can run MATE, with the option of transitioning to Cinnamon later if MATE becomes too outdated. If you want to run nothing but GNOME 3.x programs and you want a pure GNOME 3.x desktop, you can run Cinnamon right now; it's not as polished but it does work well.

I hope this clears things up for you. I do understand that MATE is being developed, but I believe it will never seamlessly integrate with GNOME 3.x technology, and I think that once Cinnamon becomes smooth and polished, MATE development may slow or cease. But I am very glad that they took the trouble to build MATE for us; it's a perfectly good solution for at least the near to middle term, and possibly forever.

I've used both, and I would rather use either one than use GNOME Shell or Unity.

Comment What is Linux Mint? (Score 5, Informative) 295

Linux Mint is a distribution of Linux that is based on Ubuntu. Like Ubuntu, it uses Debian packages.

When Ubuntu made the decision to make a new desktop environment ("Unity") and the GNOME project made the decision to make a new desktop environment ("GNOME Shell"), Linux Mint in turn made the decision to support those of us who loved GNOME 2. We have two options: MATE and Cinnamon. Both are well-supported by Linux Mint (and in fact primary development on both is by Linux Mint guys).

MATE is simply a fork of GNOME 2. For reasons that are not clear to me, GNOME 2 and GNOME 3 cannot co-exist on the same system... something about library conflicts. (Doesn't Linux have library versioning that should make it possible to avoid these conflicts? Eh, moving on.) The MATE project did a mass rename on everything in GNOME ("libgnome" -> "libmate", etc.) so MATE can co-exist on the same system with GNOME 3. So, those of us who loved the smooth polish that came from man-decades of development in GNOME can still use it.

But MATE isn't the future. From what I have heard, the library underpinnings of GNOME 3 really have improved over GNOME 2, and the new technology is a step up. Who wants to be locked into a frozen clone of GNOME 2 forever? Thus, Cinnamon. Cinnamon is a project to build on top of GNOME 3 and provide a user experience similar to GNOME 2. New plugins, new themes, etc. all go together to make a very usable desktop; but GNOME 3 apps will work seamlessly with it.

Many disgruntled Ubuntu users have abandoned Ubuntu for Linux Mint. Mint is now the top Linux distribution on distrowatch.com; I'm not sure it was even in the top ten before the whole Unity/GNOME Shell fiasco, but now it's number one.

A comment I have seen multiple times on Slashdot from different people: the Linux Mint guys are focused on making their users happy, rather than making something new. Where the GNOME Shell guys promise a "consistent and recognisable visual identity", and Mark Shuttleworth (the head Ubuntu guy) said "This is not a democracy. [...] we are not voting on design decisions.", the Linux Mint guys promise that you will "Love your Linux, Feel at Home, Get things Done!"

Linux Mint has always focused on making a beautiful system that is out-of-the-box usable. Now they are one of the top choices for people who have rejected Unity and GNOME Shell.

For me, the most important part of the announcement is that they have the password keeper working right now. I'm using Linux Mint on a laptop at work, and I can't connect to Windows shares; I'm hoping the new updates will sort that out for me.

Since this is based on Debian packages, I can probably just update in place without needing to do a full re-install.

P.S. One of my biggest complaints about GNOME 3 is that I can no longer sit a Windows user down and just say "it works pretty much like what you are used to". You may like GNOME Shell and you may think it is better, but you cannot deny that it is very different, and it would take a bit of training before a guest could use it. Linux Mint, on the other hand, works a lot like pre-Windows 8 versions of Windows; with a little customization and theming I'll bet you could fool people into thinking it was actually Windows XP.

Likewise with Unity, it is pretty different from Windows. But it's very similar to the Mac, so maybe users familiar with the Mac can use it?

Comment Re:What's the clear advantage of LLVM? (Score 2) 360

on most cases I checked, the result [with Clang] executes between 5-25% slower.

I recompiled a large audio processing code base in Clang and the result was about a 2-3% speedup, with no problems. I immediately switched to using Clang for all release builds. (I still use GCC for debug builds.)

Comment Re:HP Proliant MicroServer N40L (Score 1) 320

If you're not using blades which rack mount which is probably the best thing to do,

Sure, makes sense. The MicroServer might actually cost a bit less, but if every other server is racked, why make an exception?

you're going to want something smaller than that for purposes like time server.

How many time servers do you need? This thing is 8" by 10" by 10". It's not that large, really.

But okay, your objection to its size is duly noted.

Comment Re:HP Proliant MicroServer N40L (Score 1) 320

It's almost literally an order of magnitude larger than it needs to be.

Does it "need" to have room for four standard 3.5" hard drives? Personally, I like using RAID, and I don't think one hard drive is enough. For my purposes, yeah, it "needs" those hard drive bays. That's why I bought the thing.

This thing is designed with a standard 5.25" drive bay instead of a laptop optical drive; you can argue that this is just wasted space, if you like. I might put something other than an optical drive in there, though... I'd love some sort of slot for hot-swapping hard drives, to be used for data backup.

This thing also can take PCI express cards, but I'm going to claim that the space for that is exactly as large as it needs to be and not one cubic millimeter larger.

How could this thing be made one-tenth the size without giving up any functionality? It can't, because it isn't literally an order of magnitude bigger than it needs to be.

Now, if you are going to claim that hard drive bays, a 5.25" drive bay, and PCI express card slots are all useless, and that you could replace this whole thing with a single PandaBoard, then sure you could make something a tenth the size of this thing. So if all you "need" is an SBC with a NIC and some ROM, then you don't need something this big. However, that also doesn't make the enclosure less cunningly designed.

So, if you still want to sneer at the enclosure, now it's your turn. Provide specifics on how you would design the thing differently, and what tradeoffs will result from your design changes.

steveha
