Comment Re:It should be (Score 1) 364

Ironically, if you do text and drive, you are likely to become disabled.

Most likely not, actually. Driving is one of those activities where the risk is disproportionate: text/drink/etc. and drive, and those who suffer are more likely to be third parties than you, the driver, who is protected by a myriad of safety devices and the car itself. Especially since a collision generally means you're moving forward, and frontal impacts have been well studied - front crumple zones are big and absorb and distribute the energy over a long period of time. (Side impacts are the worst, since there's very little crumple zone before intrusion into the passenger cabin happens.)

Comment Re:Incredibly bad live stream (Score 1) 730

While I had some problems with the stream for about the first 1/2 hour or so, they eventually got it stable.

  You do realize, of course, that that was likely, hands down, the most streaming viewers of any single internet broadcast, right?

  But I am pretty sure there are meetings going on right now at Apple, and some streaming "experts" are seeing another side of Tim Cook...

It may have been the most viewers (it certainly took down apple.com for a couple of minutes at a time, during which it just returned "permission denied"), but it's supposed to be flawless.

Apple isn't new to livestreams - this is probably the 4th or 5th they've done. It started as a way to stress-test (read: try to crash) their new iCloud datacenter - just have everyone with an iOS device or Apple TV watch it live and see how many bits they can push out.

And Apple can push the bits out, but their servers started collapsing under the load (which is unusual in and of itself). And the whole Chinese translation thing was completely odd.

And yes, it's something Apple typically prepares for, because it's planned out way in advance. So I do find it inexcusable that the live stream had those flaws, because Apple prides itself on practicing for, anticipating and handling practically every conceivable situation. They even have "audience member has an emergency" in the contingency plan, with a way to quietly remove that person and give them the aid they need without disturbing or disrupting the whole event.

So no, sorry. The people behind iCloud and the livestream do deserve to be ripped a new one for these flaws, because they are supposed to have everything down pat weeks in advance. About the worst I can excuse is the stream stopping and requiring a page reload to resume because the local CDN got overloaded - and that takes only 30 seconds, tops.

Comment Re:Bikes lanes are nice (Score 1) 213

No, they didn't. The total number of lanes declined. In essence, they went from having four lanes, with no dedicated turn lane, to three lanes for most of the street, expanding to four (with one being a dedicated turn lane) at every other intersection. So, for the bulk of the street, the number of lanes declined.

Four lanes with no dedicated left-turn lane effectively become three lanes whenever someone wants to turn left.

Add to that the chaos of lane changes as people get stuck behind left-turners (and the corresponding people who want to turn left but were in the other lane to avoid left-turners at the previous intersection), and traffic just gets all jumbled up.

Put in some proper traffic lights to help clear the left-turn lanes, so traffic doesn't jam when a turn lane fills up and spills into a straight-through lane...

Basically, all that happened was that in order to build a bike lane, they had to reconfigure a bunch of intersections, and in doing so they also happened to improve traffic flow.

Comment Re:COBOL and FORTRAN (Score 1) 387


Is it ever chosen for new projects though? Would there ever be a reason to?

I get that for the type of programs originally written in COBOL it makes no sense to do a complete re-write. Things like accounting and payroll and inventory management are pretty static, and once you've got a working system, why change it.

But does COBOL offer any reason to start a new system with it?

Surprisingly few new projects do. But then they implement a whole programming layer on top of whatever the application is written in, to do similar things.

So many crappy languages have been created over the years that are really just meant to implement business logic without recompiling the source code, or without making managers and the like learn C or Java or whatever just because they want to implement a 10% discount for card members.

So you see lots of abstraction layers built up and maintained. And it's funny, since COBOL was designed for exactly this - it's designed for managers and people like that to understand. Accountants and the like love it (which is why the financial industry runs on COBOL) because they can read it and comprehend what it does without being programmers.
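To make that concrete, here's a minimal sketch (in C, with made-up rule names, not any particular product) of the kind of "business rules" table those layers end up providing: the discount logic lives in readable data rather than something a manager has to get recompiled.

    /* Hypothetical "business rules" table: readable data, not recompiled code. */
    #include <stdio.h>
    #include <string.h>

    struct rule {
        const char *customer_type;
        double      discount;        /* fraction off the base price */
    };

    static const struct rule rules[] = {
        { "card_member", 0.10 },     /* the "10% discount for card members" case */
        { "employee",    0.20 },
    };

    static double price_for(const char *customer_type, double base_price)
    {
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (strcmp(rules[i].customer_type, customer_type) == 0)
                return base_price * (1.0 - rules[i].discount);
        return base_price;           /* no matching rule: full price */
    }

    int main(void)
    {
        printf("%.2f\n", price_for("card_member", 100.00));   /* prints 90.00 */
        return 0;
    }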

Comment Re:It's not apple this time! (Score 2) 134

I think it plays heavily into the decision, and the timing. Apple and Amazon are the two brand names that have both a content and a hardware ecosystem, so they compete directly in that section of the market. Amazon is intent on taking more of that share, and the Fire phone is part of the strategy.

The business cases for both are completely opposite, however.

Amazon makes devices to sell content and services. They sell Kindle hardware, Fire hardware, etc., at cost (practically). The goal here is to promote use of Amazon's music, movie, TV, e-book and app stores, which is why they're heavily locked down and discourage use of alternative sources of content. Sure you can sideload, but that's finicky and annoying enough that most users wouldn't bother.

Apple, though, sells content to push hardware. Apple doesn't really care about the content and is (generally) more than happy to let the content industry dictate prices. Sure, Apple gets a cut, but that's just storage, bandwidth and handling the transaction and such. (It's not easy to sell something, let users re-download it at will and keep the transaction records. Hell, there are a lot of places that charge $5 to let you re-download your product up to a year later.) The money for Apple is in hardware. The content merely helps move the hardware, so Apple puts a little effort into selling and promoting it.

Comment Re:WRONG! (Score 1) 65

You're essentially saying "systems that rely on a key item are problematic. The attacker need only that key thing."
But all systems rely on a key thing. So you're not really saying anything at all.

Except that this particular key thing is highly transient and changes frequently enough that if it's any appreciable age, it needs to be re-verified.

It's why systems often re-verify email addresses every year or so - not just to avoid spamming, but to make sure the person they're sending stuff to is still the same one.

Hell, you run into the same problems in coding today - you see people accidentally reuse file descriptors, leading to all sorts of interesting bugs, or kill processes by PID only to realize the PID had been reused!
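As a rough sketch (C, with a hypothetical pidfile path, not from any real daemon), the kill-by-PID version of that mistake looks like this - nothing re-checks that the PID still names the process we think it does before the signal goes out:

    /* Stale-identifier hazard: the PID in the file may have been recycled. */
    #include <signal.h>
    #include <stdio.h>

    static void kill_recorded_pid(const char *pidfile)
    {
        FILE *f = fopen(pidfile, "r");
        long pid;

        if (!f)
            return;
        if (fscanf(f, "%ld", &pid) != 1) {
            fclose(f);
            return;
        }
        fclose(f);

        /* Time has passed since the pidfile was written. If the original
         * process exited, this PID may now belong to something unrelated,
         * and the signal lands on the wrong target. */
        kill((pid_t)pid, SIGTERM);
    }

    int main(void)
    {
        kill_recorded_pid("/var/run/mydaemon.pid");   /* hypothetical path */
        return 0;
    }

The fix is the same re-verification idea as above: check something that can't be silently recycled (the process start time, or a Linux pidfd) before acting on the stored identifier.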

Comment Re:These are new systems... (Score 1) 111

My credit card company just recently sent new cards with the microchip.
Now I see the chip reader on 80% of the card terminals I come across.
But only Wal-Mart has it implemented and working. Target has the new reader, but it isn't implemented.

It probably IS implemented; it's just waiting on the payment processor to actually flip the switch and enable the chip reader.

It's a bit more involved than just swapping old hardware for new hardware - the whole operation of chip+pin is completely different. And it's different enough that POS systems that integrate credit and debit processing software MUST change as well.

Presumably, Home Depot and Target are busy rewriting their charge card processing software to handle the new format - but in the meantime, they can certainly have the hardware installed and ready for it. Then there's retraining as well. With returns and such, the POS system often has the card stored and can reverse the charge without the customer doing anything more than signing the slip. Chip+pin can't do that, so the system needs to be able to send the refund request properly to the terminal, and the customer service folks need to be trained to tell the customer to insert their card. (And no, it won't accept just any card - it generally must be a card on the same account, so if you made a purchase specifically on your Visa and forget that, you can be stumped when the MasterCard you normally use is refused.)
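A toy sketch of that refund constraint (C, with invented structures and tokens - this is not a real EMV or processor interface): the POS keeps an account token from the original sale and only sends the refund to the terminal if the freshly inserted card maps to the same account.

    /* Toy refund check: the inserted card must match the original account. */
    #include <stdio.h>
    #include <string.h>

    struct sale {
        const char *receipt_id;
        const char *account_token;   /* saved from the original purchase */
    };

    static int refund_allowed(const struct sale *original,
                              const char *inserted_card_token)
    {
        return strcmp(original->account_token, inserted_card_token) == 0;
    }

    int main(void)
    {
        struct sale original = { "R-1001", "VISA-ACCT-42" };

        printf("%d\n", refund_allowed(&original, "VISA-ACCT-42"));  /* 1: same account */
        printf("%d\n", refund_allowed(&original, "MC-ACCT-7"));     /* 0: wrong card */
        return 0;
    }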

The hardware's just the basic part; it's the whole POS integration that's the difficult part. Wal-Mart has it easier because they're always tweaking systems and policies, so they've probably been live-beta'ing the new changes all along.

In fact, the retailers most likely to have the new machines working are the small ones, provided they upgrade (usually as part of an equipment refresh or a broken-terminal replacement), since there's no integration to do.

Heck, my local comic store has an interesting charge machine - their old one was replaced with the same model when it broke, then earlier this year that got replaced with a brand new one with a bright, shiny, high-res LCD. It's apparently running the old software in an emulator: it shows all sorts of status information on the screen, but the actual credit card processing uses large pixelated letters that emulate the old low-res display. No, it's not simply using a large font - it's actual pixelated text where you can see the individual dots rendered. (And you know the screen can go finer, because the stuff around the emulated low-res area is in color and fine text.)

Comment Re:Please let it be single-player (Score 1) 266

I hope that whatever Romero is doing doesn't turn out to be Free-2-Play or co-op or with multiplayer focus.

The beauty of his best games was that they were single-player, with some very fun multiplayer as a bonus. The current gaming industry mode seems to be co-op or multiplayer primarily with maybe a very short single-player campaign thrown in.

I understand that this trend started primarily as a way to prevent some kid in Estonia from having a nickel in his pocket that didn't belong to the gaming industry, and I don't fault them because their nature is to be money-grubbing monsters who basically hate their customers. But somehow, the great single-player games managed to make a nice profit. Nice enough to finance a stinker like Daikatana.

Actually, multiplayer was EXTREMELY popular - it turns out a lot of people didn't just want single-player. It's been that way since DOOM was released in the early 90s. Remember the early builds that had a nasty habit of taking down office networks? Turns out a LOT of people wanted to play against other people rather than bots. And it was popular enough that people were willing to schlep their desktop PCs, monitors and all the peripherals to other people's houses for "LAN parties". (And this was back in the CRT days, so hauling your 17" monitor was generally a big deal.)

This multiplayer aspect is what drove DOOM, Quake, etc. sales, not the single-player campaign. In fact, there are very few first-person anythings that have a (half-)decent single-player mode (the Half-Life, Halo and Portal franchises come to mind), because most players never touch the campaigns - they just buy the game and hit "go online".

So since then, competitive multiplayer has generally been the order of the day, because tons of people play it.

It was maybe a little under a decade ago that people started taking cooperative modes seriously - seeing that while a good majority of people play multiplayer only, there's a sizable chunk who do play the single-player mode and rarely touch multiplayer, or at least play it a lot less than the core group does.

The vast majority of people you see buying the latest shooter basically do it to play multiplayer, be it against random people on the Internet or with their friends.

Comment Re:SHA-3 (Score 1) 108

Never mentioned anything about ajax or responsive etc., only about support for SNI. Also, a bit of selective quoting on the part about losing XP customers - you forgot to include the bit where I said I "would rather lose XP-based people than those who use the latest Chrome builds etc. and won't connect because of security alerts". In other words, if one of those two sets has to be lost for some reason, I would choose to lose the older XP set. Obviously it would be best to lose neither, but given an enforced choice, the XP users are toast (and they count for less than 0.5% of our users, so I'm really not going to lose any sleep over that).

There's no reason anyone should be using XP SP2, as it was obsoleted a LONG time ago in favor of XP SP3, which works just fine. It was only recently that XP SP3 fell out of extended support.

So you're not likely to lose too many people (unless you cater to people running ancient, unsupported, highly vulnerable versions of Windows). I would think the majority of XP users have been running SP3 for a while now.

Comment Re:at least they have 4 and 8 core models as well (Score 1) 105

IBM addressed this with POWER7 and newer in a fairly innovative way. They have an option called TurboCore mode which turns off half the cores. The ones still running can use the disabled core's caches, and because of the space available for heat dissipation, clock speed could be bumped up. The result was half the cores, but almost the same performance due to the faster clock and cache available.

Intel has it too - it's called "Turbo Boost". Basically, if the CPU is not thermally constrained and the other core is basically idle, the idle core goes into a near-shutdown state while the clock on the active core is boosted. This is designed less to save on per-core licensing and more for single-thread performance - if the workload consists of one thread, then rather than have the other core sit there idling for threads that aren't coming, it can be put into a low-power state and the one running real code can be boosted to speed things up.

This takes into account that a lot of PC workloads are still single-threaded, while a few can benefit from more cores.
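A back-of-the-envelope sketch of that single-thread argument (C, with made-up clock numbers rather than any real part's specs): park the idle core, and the active one gets to run at its boost clock instead of its base clock.

    /* Made-up example clocks; the point is the ratio, not the numbers. */
    #include <stdio.h>

    int main(void)
    {
        double base_ghz  = 2.0;   /* both cores active, shared thermal budget */
        double boost_ghz = 2.8;   /* one core parked, the other boosted */

        printf("single-thread speedup: ~%.0f%%\n",
               (boost_ghz / base_ghz - 1.0) * 100.0);   /* ~40% */
        return 0;
    }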

Comment Re:this is a more interesting quote (Score 1) 60

It's not an interesting quote, it's the truth.

People rag on Apple etc. for only doing dual-core, and yet this is the reason WHY they do dual-core.

Quad-core (and "octa-core") SoCs are a joke because the power consumption is high. Even at 1mW/MHz (i.e. 1W/GHz), which is roughly what ARM cores manage, the sheer speed of modern SoCs means they dissipate a fair bit of power.

E.g., a 2.5GHz Snapdragon with 4 cores is 10GHz of aggregate clock, or about 10W of power consumption - and that's for power-efficient ARM. That's 10W of heat in a small area with very little means of being dissipated (there are memory chips stacked on top of the SoC, and PCB underneath).

You're going to hit the thermal limit quickly. I've seen the calculations: roughly two cores going full bore at 100% while the third and fourth can only manage 50% is the most processing you can sustain without thermal shutdown - and that's running right at the maximum junction temperature with room-temperature air around it. Add in PoP packaging and an enclosed case, and your maximum performance is much lower.
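Spelling that arithmetic out (C; the 1W/GHz figure and the "2 cores at 100% plus 2 at 50%" split are this post's own rough numbers, not vendor specs):

    /* Rough SoC power budget using the post's own numbers. */
    #include <stdio.h>

    int main(void)
    {
        const double watts_per_ghz = 1.0;   /* ~1 mW/MHz */
        const double clock_ghz     = 2.5;
        const int    cores         = 4;

        double all_out = cores * clock_ghz * watts_per_ghz;               /* 10.0 W */
        double limited = (2 * 1.0 + 2 * 0.5) * clock_ghz * watts_per_ghz; /*  7.5 W */

        printf("four cores flat out:   %.1f W\n", all_out);
        printf("2 at 100%% + 2 at 50%%: %.1f W\n", limited);
        return 0;
    }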

Comment Re:BWAHAHAHA (Score 1) 60

The iOS app store is run by goosestepping nazis who are happy to ruin years of development to reject your app because they might be thinking of doing something similar or because they simply don't like it. Or because it actually does something useful and the UI is "too complex" for teeny-bopper facebook junkies.

Actually, Apple's App Store is a perfect model for console development, because Microsoft, Sony and Nintendo have far more stringent requirements. And no, you generally don't hear about them, because their development contracts are far more airtight. You sometimes hear of small leaks, like how the developer of The Binding of Isaac did a port for the Nintendo 3DS only to have Nintendo withdraw their pre-approval right at release, or how Microsoft has a $40K certification charge. (No, Sony's charges aren't less; in fact, they're likely MORE. It's just that no one dares cross Sony, and Microsoft is the one throwing money around.)

And if you're spending millions to develop for Apple without any assurance it'll be approved, you're an idiot developer who deserves to lose the money. If you're going to throw that kind of money around, you'd have to either be an established developer or be very rich (and stupid). Established developers generally have already been through console approvals and know how to work within Apple's comparatively lax requirements.

Rich and stupid are the ones who throw millions on an app without even trying to see if the idea is viable.

Comment Re:Not just one mobo (Score 1) 102

The problem is not the fix - once you know the problem is power, it's trivial to fix.

The problem is identifying the root cause. Power problems are highly subtle - and usually very intermittent. The FPGAs may crash under heavy load, but it's one of those "phase of the moon" bugs: you can feed in the same test patterns that crashed it and it'll work the next time around.

And bugs that are impossible to replicate are the hardest ones to fix - especially when it's a new board that required new changes to the RTL, so you're not exactly sure whether it's a hardware or a software problem. Or even a compiler problem (since half the issues can easily be caused by bugs in the compiler).

So you have a problem, and the problem can be in the hardware, or in the software (both the stuff you wrote and the vendor's stuff), or even in the software running on top of the FPGAs. And you can't replicate it either, so yeah, it's going to take a LOT of work to find the root cause.

If you're lucky, you can isolate matters by running the current design on the previous hardware (provided that's possible) which narrows down the list of potential causes greatly. But because of the lead time in hardware, that's generally impossible - you just can't run the current software on the old hardware because too many things changed, just as you can't run the old software on the new hardware.

Comment Re:Huh? (Score 1) 60

He's using the NDK, but it's not really standardized. Different phones have different hardware configurations, and when you are doing 3D graphics, different phones have to be programmed differently.

To some degree this is a problem with iPhones, but there are not nearly as many models, so there is similarly less work.

The bigger difference, though, is that the OpenGL capabilities report is more accurate on iOS, or at least easily worked around. Given there are only 7 (soon to be 8) different GPUs out there (of which, realistically, you only need to support 3-4), it's a lot easier when you find a bug, since it'll be hit by lots of other devs.

On Android, despite plenty of phones having the same SoC, the video drivers used in them can vary dramatically - enough that if you rely on their capability reports, you'll find phones that work and a lot of phones with strange bugs, even on the same GPU. You'd think there would be more consistency, since there are only about 4 different SoC vendors and 4 different GPU vendors.
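In practice that means carrying your own quirks table keyed off the renderer string rather than trusting the capability report. A sketch (C; the renderer substrings and the workaround flag are invented examples, and in a real app the string would come from glGetString(GL_RENDERER)):

    /* Per-driver workaround table keyed on the renderer string. */
    #include <stdio.h>
    #include <string.h>

    struct quirk {
        const char *renderer_substring;   /* matched against GL_RENDERER */
        int         avoid_fbo_fast_path;  /* hypothetical workaround flag */
    };

    static const struct quirk quirks[] = {
        { "ExampleGPU 300", 1 },   /* say this driver lies about FBO support */
        { "ExampleGPU 400", 0 },
    };

    static int needs_workaround(const char *renderer)
    {
        for (size_t i = 0; i < sizeof quirks / sizeof quirks[0]; i++)
            if (strstr(renderer, quirks[i].renderer_substring))
                return quirks[i].avoid_fbo_fast_path;
        return 0;   /* unknown driver: fall back to trusting its report */
    }

    int main(void)
    {
        printf("%d\n", needs_workaround("ExampleGPU 300 (driver 1.2)"));  /* 1 */
        return 0;
    }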

Of course, the other thing is the whole quad-core race. It's a cheat - and it's why Apple keeps using dual cores. Thermal limits mean you can't run 4 cores at full speed for more than minutes at a time - basically just long enough to get through a benchmark. At the very most, you can run 2 cores at full speed while the other two are capped at around 50%. It doesn't matter which SoC - all quad-cores suffer the same problem.

Comment Re:Is there any point continuing GCC's development (Score 1) 99

The more I use it, the more I ask myself, Is there even any point continuing the development of GCC?

Well, the "advantage" of GCC is that it's GPL. To a large number of FOSS people, the GPL is a far better license than BSD, so it doesn't matter whether LLVM/Clang is better, superior or whatever; GCC is GPL and that is all that matters.

Also, there are a lot of GCC-isms in existing code, many of which are not supported by LLVM/Clang or other compilers, and they often do strange things, so porting that code to another compiler is often difficult.
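For example, this small illustrative file builds with GCC's default GNU dialect; Clang accepts the statement expression but rejects the nested function, which is a GCC-only extension:

    /* Two "GCC-isms": a statement expression and a nested function. */
    #include <stdio.h>

    int main(void)
    {
        int x = ({ int t = 3; t * t; });        /* GNU statement expression */

        int add(int a, int b) { return a + b; } /* GNU nested function */

        printf("%d\n", add(x, 1));              /* prints 10 */
        return 0;
    }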

And GCC supports far more backends and languages, so it still has a place.

Of course, LLVM/Clang is progressing quickly because of extensive corporate support (being BSD licensed versus GPLv3), and projects like Free/Net/OpenBSD love the fact that the entire base OS can be BSD licensed now rather than having to rely on a non-BSD component.
