Comment Re:Amateurish (Score 1) 516

I tend to agree about the icons, but I do think flat design is particularly bad in this respect. By its nature, it removes tools that could otherwise be used for distinguishing different types of content, establishing hierarchy, and directing the user to important details.

The Microsoft style of flat as seen here isn't as bad as the more extreme "monochrome line art" version that is plaguing web sites at the moment. Even so, all those subtle lighting-based effects we used to see, and even the not-so-subtle styling of say Apple's older metallic or aqua looks, could serve practical purposes as well as creating a signature style for a platform.

Comment Not just software (Score 2) 347

I always hear that software projects are often late and over budget, but I don't think it's worse than in any other industry. I've seen countless examples of construction projects that ran over budget and took longer than expected. Often the reasons are the same: either the requirements changed halfway through, or the project was made more complicated than it needed to be to accomplish the task. There are a few bridges in my area that have been huge boondoggles in the past decade, and they all try to look impressive, where a much more conservative design would have been easier, cheaper, and faster to build, and still would have solved the transportation problem. But everybody wants a bridge that looks pretty.

Projects that deal with a small workload and don't have changing requirements are much more likely to stay on budget and on time. That's how things should be broken up: build small pieces and deliver them as they become complete. Don't set out to build an entire five-year task as a single project.

Comment Re:1.39B did /not/ "use" Facebook last month... (Score 2) 53

I really don't get how they can report numbers like this and not be called out on it all the time. Just from a quick Google, it looks like there are around 3 billion internet users, which I would probably believe. What they are saying is that almost half of all internet users used their site last month. Considering that Facebook is blocked in China, and China accounts for 0.6 billion internet users, the claim looks even more ludicrous: they would need to reach well over half of everyone outside the firewall. I know there are ways around the Great Firewall of China, but I still don't see how they get that number. I think it has something to do with a lot of duplicate accounts. I know a lot of people who have multiple Facebook accounts. Some do it because they have different groups of "friends" they don't want to co-exist, and some people do it for games, so they can gift things to themselves. There are plenty of people with a lot of accounts. I'm sure there are gold farmers in FarmVille if you look for them.
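To put rough numbers on that scepticism, here is the arithmetic using only the figures above (1.39 billion monthly actives, ~3 billion internet users, ~0.6 billion of them behind the firewall); a quick sketch, not official data:

    #include <cstdio>

    int main() {
        const double mau = 1.39, internet = 3.0, china = 0.6; // billions, figures from the post
        std::printf("share of all internet users:      %.0f%%\n",
                    100 * mau / internet);           // ~46%
        std::printf("share outside the Great Firewall: %.0f%%\n",
                    100 * mau / (internet - china)); // ~58%
    }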

Comment Re:Amateurish (Score 1) 516

The thing that really hit me about the screenshot was how crowded it looks. The example is presenting information with a clear underlying structure (a file system) and a small number of actions I can take, and probably half the area of that window is empty space. And yet, my immediate reaction is that there's no clear structure to tell me where to look, and the design desperately needs more visual hierarchy and better use of whitespace.

Of course, this is a recurring problem with the current trend for flat designs, bright colour schemes with limited contrast, and very rectilinear graphics and layout. It's still disappointing that Microsoft seems to be chasing Apple and Google down that blind alley, though, instead of coming up with something more interesting, distinctive, and most importantly, usable.

Comment Re:Problem with this scheme (Score 3, Informative) 109

I agree that they currently make it way too hard to determine which CPU is better than another. Currently they have two different product lines both called i3/i5/i7: the i7 that's used in laptops is not the same i7 that you will see in a standard desktop chip, and they also sell small form factor desktops that use the laptop version of the i3/i5/i7. Then there are the lower-end chips like Celeron/Pentium/Atom, and I can't figure out how they are supposed to compare to each other. It was a lot easier when they actually changed the marketing name of the chip each time they made a change to the processor: 386, 486, Pentium, Pentium 2, Pentium 3, Pentium 4, and so on. They've had the i3/i5/i7 names since 2008, and they've gone through Nehalem, Sandy Bridge, Ivy Bridge, Haswell, and Broadwell all without changing the marketing name of the chip. You have to look at something like i7-4770, or even worse, look up the exact model number (BX80646I74770), to try and figure out exactly what you are getting.
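For what it's worth, there is one decodable rule buried in there, though the box won't tell you: from Sandy Bridge onward, the leading digit of the four-digit model number is the generation (i7-4770 is a 4th-generation Haswell part), while first-generation chips like the i7-920 used three digits. A small sketch, for illustration only:

    #include <cstdio>

    // Returns the Core generation implied by a model string like "i7-4770".
    // Three-digit models (e.g. "i7-920") were first generation.
    int generation(const char* model) {
        const char* dash = model;
        while (*dash && *dash != '-') ++dash;
        if (*dash != '-') return -1;
        int digits = 0;
        for (const char* p = dash + 1; *p >= '0' && *p <= '9'; ++p) ++digits;
        return digits >= 4 ? dash[1] - '0' : 1;
    }

    int main() {
        std::printf("i7-4770 -> gen %d\n", generation("i7-4770")); // 4 (Haswell)
        std::printf("i7-920  -> gen %d\n", generation("i7-920"));  // 1 (Nehalem)
    }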

Comment Re:This is hilarious... (Score 1) 270

Protectionism is also a huge factor. China constantly decries other countries as "protectionist", while being one of the most protectionist countries on the planet. The spying just gives them another excuse to claim that they aren't *REALLY* violating WTO rules, they are just protecting themselves (if the spying thing hadn't come up, some other excuse would have).

Comment Re:Watches (Score 2) 141

It really depends on the kind of watch band you have and how tight you wear it. If it's something like leather, cloth, or rubber then you definitely should take it off daily. If you have a metallic band and it isn't completely tight against your wrist, then there should be enough air flow around the watch band to not have any problems. If you have a full metal watch, and wear it to bed, and wear it in the shower, then it should remain relatively sanitary, and you really don't ever have to take it off.

Comment Re:Pesticides for humans (Score 1) 224

The other problem with chlorine is that it's among the cheaper ways of bringing a semblance of sanitation to a municipal water supply.

Really classy first-world jurisdictions can use ozone systems (which have the advantage of basically perfect decomposition into harmless oxygen by the time the water reaches customers, and need only electricity and occasional spare parts at the treatment plant, rather than big tanks of chlorine); but anywhere else is probably chlorinating the fecal bacteria out of the water supply, which saves a ton of lives (especially if the medical system is lousy); but also means that chlorine is basically just sitting around.

We ran into that issue in Iraq from time to time. Chlorine is a really lousy war gas, barely toxic enough to count as one at all; but just sending a couple guys with guns and a truck down to the water treatment plant could score you enough of the stuff to release in the nearest crowded area for some reliable freaking out and some casualties.

Comment Re:Pesticides for humans (Score 1) 224

I'm no industrial process chemist, so I don't know how different the factories look; but my understanding is that this is part of why the Chemical Weapons Convention's lists of scheduled chemicals, and its multiple schedules, are as messy as they are. There are some chemicals that we've decided nobody has any legitimate reason to be playing with; but there are loads of dual-use ones.

Comment Re:Pesticides for humans (Score 1) 224

The history gets a little muddled because different classes of chemicals were developed with different primary purposes at different times.

Various primitive fumigants (burning sulfur, various other 'noxious smoke' type stuff) date back approximately forever, and have been used to discourage pests; and also 'discourage' the guys digging a tunnel under your castle; but are pretty tepid war gasses in the open, more suffocating than overtly toxic.

Some of the WWI war gasses were substantially tailored for effect on humans (or, even where previously known, like chlorine, were pretty expensive and annoying to deal with as agricultural agents), though at least the arsenicals also overlapped with pesticide developments.

Nerve agents started as pesticide research (and to this day, the lesser organophosphates are used for the purpose); but (thanks to lousy benchtop practice that nearly killed a few of the scientists involved) it became clear that the peppier flavors were also...eminently suitable...for getting rid of large mammalian pests. Thankfully, in WWII, the Germans overestimated Allied knowledge of nerve agents, based on a misreading of the patent literature, and didn't want to risk reprisal. Had this not been the case, V-2s full of sarin would have been technologically feasible, which would have really ruined some days.

Comment Re:How's this any different... (Score 2) 114

There's also the basic difference that 'enterprise' MiTM-ing is potentially kind of a dick move, depending on exactly how hard HQ feels like squeezing somebody's innocent checking of their email over lunch or whatever; but it's a fairly clear exercise of control over hardware by that hardware's owner.

Seeding hardware with malware and then selling it? Not so much. Yeah, maybe there is some nonsense clickwrap EULA; but there is no real consent of any kind, or even a proper warning.

If only for your own sake (having your own employees fooled because your MiTM proxy re-signs bogus certs without flagging them would be counterproductive), odds are that 'enterprise' systems are also more competent; but even if they aren't, it's a pretty major difference in scope.

In my own admin-ly capacity, playing content cop is something I do reluctantly, and only as much as network security requires; but we never tamper with devices we don't own (deny them access to the network, sure; touch them, never), and staff are proactively warned and welcome to ask in more detail, if they wish, about what we do and why we do it.

Comment Re:But... (Score 1) 261

I saw a recent review of a smartphone that had two screens, one LCD and one eInk. Modern eInk displays can refresh quickly enough for interactive use and don't drain the battery when they're not updating. The screen that I'd love to see is eInk with a transparent OLED on top, so that text can be rendered on the eInk display and graphics / video overlaid on the OLED. The biggest problem with eInk is that the PPI is not high enough to make it colour yet. You get 1/3 (or 1/4 if you want a dedicated black) of the resolution when you add colour filters, which means you're going to need at least 600PPI to make colour plausible.
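As a quick sketch of that arithmetic (the 600PPI target is the figure above; the 1/3 and 1/4 factors come from spending three or four native pixels per colour pixel):

    #include <cstdio>

    int main() {
        const double native_ppi = 600.0; // hypothetical panel density
        std::printf("RGB triad:        %.0f PPI effective\n", native_ppi / 3); // 200
        std::printf("RGB + black quad: %.0f PPI effective\n", native_ppi / 4); // 150
    }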

The other problem that they've had is that LCDs have ramped up the resolution. My first eBook reader had a 166PPI eInk display. Now LCDs are over 300PPI but the Kindle Paperwhite is only 212PPI, so text looks crisper on the LCD than the eInk display, meaning that you're trading different annoyances rather than having the eInk be obviously superior. With real paper you get (at least, typically a lot more than) 300DPI and no backlight.

Comment Re:amazing (Score 1) 279

The problem here is latency. You're adding (at least) one cycle of latency for each hop. For neural network simulation, you need all of the neurones to fire in one cycle and then consume the result in the next cycle. If you have a small network of 100x100 fully connected neurones, then the worst case (assuming wide enough network paths) with a rectangular arrangement is 198 cycles to get from corner to corner. That means the neural network runs at around 1/200th the speed of the underlying substrate (i.e. your 200MHz FPGA can run a 1MHz neural network).
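A quick sanity check of those numbers, assuming one cycle per hop and simple Manhattan routing on the mesh (grid size and clock are from the post):

    #include <cstdio>

    int main() {
        const int w = 100, h = 100;               // grid of neurones
        const int worst_hops = (w - 1) + (h - 1); // corner-to-corner Manhattan distance
        const double fpga_mhz = 200.0;
        std::printf("worst case: %d cycles\n", worst_hops);                // 198
        std::printf("effective rate: ~%.2f MHz\n", fpga_mhz / worst_hops); // ~1.01
    }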

Your neurones also become very complex, as they all need to be network nodes with store and forward, and they are going to have to handle multiple inputs every cycle (consider a node in the middle: in the first cycle it can be signalled by 8 others, in the next by 12, and so on). The exact number depends on how you wire the network, but for a flexible implementation you need to allow for this.

Comment Re:Good grief... (Score 1) 681

What's the justification for compilation unit boundary? It seems like you could expose the layout of the struct (and therefore any compiler shenanigans) through other means within a compilation unit. offsetof comes to mind. :-)

That's the granularity at which you can do escape analysis accurately. One thing that my student explored was using different representations for the internal and public versions of the structure. Unless the pointer is marked volatile, or atomic operations occur that establish happens-before relationships affecting the pointer (you have to assume that functions whose bodies you can't see contain such operations), C allows you to do a deep copy, work on the copy, and then copy the result back. He tried this to transform between column-major and row-major order for some image processing workloads. He got a speedup for the computation step, but the cost of the copying outweighed it (a programmable virtualised DMA controller might change this).
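A minimal sketch of that copy/transform/copy-back idea, assuming a plain float image; the names and the trivial kernel are illustrative, not the student's actual code:

    #include <cstddef>
    #include <vector>

    // Deep-copy a row-major image into a column-major buffer.
    std::vector<float> to_column_major(const std::vector<float>& img,
                                       std::size_t w, std::size_t h) {
        std::vector<float> out(w * h);
        for (std::size_t y = 0; y < h; ++y)
            for (std::size_t x = 0; x < w; ++x)
                out[x * h + y] = img[y * w + x];
        return out;
    }

    void column_pass(std::vector<float>& img, std::size_t w, std::size_t h) {
        std::vector<float> cm = to_column_major(img, w, h); // work on the copy
        for (std::size_t x = 0; x < w; ++x)        // columns are now contiguous,
            for (std::size_t y = 0; y < h; ++y)    // so this walk is cache-friendly
                cm[x * h + y] *= 0.5f;             // stand-in for the real kernel
        for (std::size_t y = 0; y < h; ++y)        // copy the result back
            for (std::size_t x = 0; x < w; ++x)
                img[y * w + x] = cm[x * h + y];
    }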

I suppose you could do that in C++ with template specialization. In fact, doesn't that happen today in C++11 and later, with movable types vs. copyable types in certain containers? Otherwise you couldn't have, say, a vector of a move-only type. Granted, that specialization is based on a very specific trait, and without it the particular combination wouldn't even work.

The problem with C++ is that these decisions are made early. The fields of a collection are all visible (so that you can allocate it on the stack) and the algorithms are as well (so that you can inline them). These have nice properties for micro optimisation, but they mean that you miss macro optimisation opportunities.

To give a simple example, libstdc++ and libc++ use very different representations for std::string. The implementation in libstdc++ uses reference counting and lazy copying for the data. This made a lot of sense when most code was single threaded and caches were very small, but it is now far from optimal. The libc++ implementation (and possibly the new libstdc++ one; they're breaking the ABI at the moment) uses the short-string optimisation, where small strings are embedded in the object (so fit in a single cache line), and doesn't bother with the CoW trick (which costs cache coherency bus traffic and doesn't buy much saving anymore, especially now that people use std::move or std::shared_ptr in the places where the optimisation would matter).
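A toy illustration of the short-string optimisation (the real libc++ layout differs, and copy/move operations are omitted for brevity): small strings live inline in the object, so they fit in a cache line and never touch the heap.

    #include <cstddef>
    #include <cstring>

    class SmallString {
        static constexpr std::size_t kInline = 15;
        std::size_t len_;
        union {
            char inline_[kInline + 1]; // active when len_ <= kInline
            char* heap_;               // active for longer strings
        };
    public:
        explicit SmallString(const char* s) : len_(std::strlen(s)) {
            if (len_ <= kInline) {
                std::memcpy(inline_, s, len_ + 1);   // no allocation at all
            } else {
                heap_ = new char[len_ + 1];
                std::memcpy(heap_, s, len_ + 1);
            }
        }
        ~SmallString() { if (len_ > kInline) delete[] heap_; }
        const char* c_str() const { return len_ <= kInline ? inline_ : heap_; }
    };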

In Objective-C (and other late-bound languages) this optimisation can be done at run time. For example, if you use NSRegularExpression with GNUstep, it uses ICU to implement it. ICU has a UText object that implements an abstract text thing and has a callback to fill a buffer with a run of characters. We have a custom NSString subclass and a custom UText callback which do the bridging. The abstract NSString class has a method for getting a range of characters. The default implementation gets them one at a time, but most subclasses can get a run at once. The version that wraps UText does this by invoking the callback to fill the UText buffer and then copying. The version that wraps in the other direction just uses this method to fill the UText buffer. This ends up being a lot more efficient than if we'd had to copy between two entirely different implementations of a string.
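The pattern is easier to see in code; here it is paraphrased in C++ with hypothetical names (the real classes are Objective-C): one bulk primitive whose default falls back to per-character access, so a subclass that can hand over a whole run cheaply only overrides the bulk path.

    #include <cstddef>

    class AbstractString {
    public:
        virtual ~AbstractString() {}
        virtual std::size_t length() const = 0;
        virtual char16_t at(std::size_t i) const = 0;
        // Default: one character at a time. Subclasses that can produce a
        // whole run at once (as the UText wrapper does) override this.
        virtual void getRange(char16_t* buf, std::size_t start, std::size_t n) const {
            for (std::size_t i = 0; i < n; ++i)
                buf[i] = at(start + i);
        }
    };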

Similarly, objects in a typical JavaScript implementation have a number of different representations (something like a struct for properties that are on a lot of objects, something like an array for properties indexed by numbers, something like a linked list for rare properties) and will change between these representations dynamically over the lifetime of an object. This is something that, of course, you can do in C/C++, but the language doesn't provide any support for making it easy.
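A sketch of that representation switching, purely illustrative and nothing like a real engine's tuning: start with a compact shape-plus-slots layout and migrate to a map once the properties stop looking struct-like.

    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    class PropertyStore {
        bool fast_ = true;
        std::vector<std::string> shape_;     // property names, struct-like layout
        std::vector<double> slots_;          // values, indexed in step with shape_
        std::map<std::string, double> slow_; // fallback for rare/ad-hoc properties
    public:
        void set(const std::string& key, double v) {
            if (fast_) {
                for (std::size_t i = 0; i < shape_.size(); ++i)
                    if (shape_[i] == key) { slots_[i] = v; return; }
                if (shape_.size() < 8) {     // still small: grow the shape
                    shape_.push_back(key);
                    slots_.push_back(v);
                    return;
                }
                for (std::size_t i = 0; i < shape_.size(); ++i)
                    slow_[shape_[i]] = slots_[i];  // migrate, then fall through
                fast_ = false;
            }
            slow_[key] = v;
        }
    };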

Comment Re:Good grief... (Score 1) 681

Depends on whether they care about performance. To give a concrete example, look at AlphabetSoup, a project that started in Sun Labs (now Oracle Labs) to develop high-performance interpreters for late-bound dynamic languages on the JVM. A lot of the specialisation that it does has to do with efficiently using the branch predictor, but in their case it's more complicated because they also have to understand how the underlying JVM translates their constructs.

In general, though, there are some constructs that are easy for a JVM to map efficiently to modern hardware and some that are hard. For example, pointer chasing in data is inefficient in any language, and there's little that the JVM can do about it (if you're lucky, it might be able to insert prefetching hints after a lot of profiling). Cache coherency can still cause false sharing, so you want to make sure that fields of your classes that are accessed by different threads are far apart, and that fields accessed together are close; a JVM will sometimes do this for you (I had a student work on this, but I don't know if any commercial JVM does it).
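In C++ you can express the layout fix for false sharing directly (a JVM would have to do the equivalent work itself); a minimal sketch:

    #include <atomic>

    struct Counters {
        // Each counter gets its own cache line, so a writer hammering
        // 'produced' doesn't invalidate the line a reader polls for 'consumed'.
        alignas(64) std::atomic<long> produced{0};
        alignas(64) std::atomic<long> consumed{0};
    };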
