
Comment Re:so what is porn? (Score 1) 310

How, exactly, do you expect to identify "child porn"? Do you think employing millions of minimum-wage types in a third world country to deep-packet inspect your IP connections will achieve anything useful? (I guess it might work as an anti-poverty measure!)

Or maybe you want to block all JPEGs with more than 8 pixels? Or "all videos with too much flesh colour" (suggested on talk radio recently)? How do you define skin colour, exactly? What about ASCII porn?

Submission + - Eye strain and ADHD 1

Anne Thwacks writes: Once you are over 40, the lenses in your eyes become hard. This means your eyes lose the ability to “accommodate” — that is, the ability to adjust their focus from far to near. If you are naturally long-sighted, you will need to use reading glasses. If you are naturally short-sighted, you probably needed glasses anyway – but will now need to wear them more often, and may need varifocals or several pairs of glasses for different distances.

If your glasses are not perfect, and possibly even if they are, the lack of accommodation means your eyes have to strain to focus as the eye muscles pull on the inflexible lenses. I find this straining often enables me to see things for a while without wearing my glasses, but it eventually becomes quite painful. As a consequence, I have an increasing tendency not to even try to focus, but instead to guess, possibly relying on occasional glances.

I am now finding that the pain of focussing to achieve foveal vision is not just leading me to “not look”, but is becoming associated with paying attention itself, and I am finding it hard to pay attention, not just for long periods but at all, because some part of me fears the pain.

Is it possible that one cause of ADHD is poor eyesight?

Comment Well the other thing (Score 3, Insightful) 814

Is that we need to define what we mean when we ask for someone's sex or gender on a form. I think part of the problem is that different people interpret it differently. Some in the transgender community say it is 100% about what you personally choose to identify as. So you could be genetically male, with an XY chromosome set, and biologically male, as in have male genitals and body structure, but identify yourself as female, and that's what you should mark down. However, other people might disagree. If you went into the women's dressing room at a rec center, the biological women in there might not be at all comfortable with that, since they identify you as male due to your biology.

So one of the things we need to do is clarify the terms, and perhaps have different terms for identifying someone's genetic structure, biological makeup, and sexual identity.

Like when you are talking to a doctor, the genetic definition matters. Reason is that health issues do NOT affect both genders equally, and it has nothing to do with appearance or identity, it has to do with genetics. So even if you've had a sex change operation and all that, proper identification as genetically male could be relevant to medical providers.

For most people it is more about biology, as in what bits you have between your legs. We visually identify people as male or female, and most are pretty clearly one or the other. That is one of the reasons it gets asked for on lots of forms of ID: to help ensure that the ID is for the person holding it. For that, we might want to use your biological appearance. If you undergo sex change surgery, then you change that identifier.

In terms of the pronoun you wish people to use to identify your gender, that really is up to you, though you need to understand it can be confusing to people if you appear and sound different than you identify.

So as you say we need to review why the information is collected, and then define terms to say what sort of thing we are talking about. We can't just say "Well let people identify as whatever they want," since reality doesn't work that way. However if you are just collecting it for no real reason, then don't and let people identify how they wish.

Comment Sooo... You know you can get non-wifi bulbs right? (Score 1) 401

You can have nice, efficient, LED bulbs with no WiFi in them. Go to Amazon, Home Depot, pretty much wherever you like. The Philips L-Prize bulb is the one I'd recommend. Very nice spectrum, more efficient than most other LEDs, long life.

Or I suppose you could just whine on Slashdot about a product that isn't on the market yet.

Comment The problem is that you see different ones spec'd (Score 4, Informative) 107

Wire-based Ethernet is spec'd at MAC-layer throughput. It is talking about the data rate of Ethernet frames; the 8b/10b encoding overhead is already accounted for, and all that. So you discover that, particularly with jumbo frames, you get real near that speed in actual throughput.

Wireless Ethernet, not so much. You find that effective throughput, even under basically ideal conditions, is way less than the listed speed.

So it leads to confusion for people. Basically, wireless is over-advertising the speed.
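The wired side of the gap comes down to framing overhead, which is small and fixed. A rough sketch (the overhead byte counts below are standard Ethernet figures, not numbers from this comment) of payload efficiency at the MAC layer, standard vs jumbo MTU:

```python
# Rough sketch: fraction of on-the-wire bytes that are user payload
# for a maximum-size Ethernet frame, standard MTU vs jumbo frames.

def ethernet_efficiency(mtu_bytes):
    """Payload bytes divided by total byte times consumed on the wire."""
    preamble = 8         # preamble + start-of-frame delimiter
    header = 14          # dst MAC + src MAC + EtherType
    fcs = 4              # frame check sequence (CRC-32)
    interframe_gap = 12  # mandatory idle period, counted in byte times
    overhead = preamble + header + fcs + interframe_gap
    return mtu_bytes / (mtu_bytes + overhead)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {ethernet_efficiency(mtu):.1%} payload efficiency")
# MTU 1500 comes out around 97.5%, jumbo frames around 99.6%
```

Wireless stacks contention, ACKs, retries, and management frames on top of a similar framing cost, which is why real Wi-Fi throughput lands so far below the number on the box.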

Comment He may be right though (Score 1) 339

Remember that the "overclocking" of the non-K chips actually isn't. Turbo Boost is precisely controlled. Intel gets to spec how high it can go, and under what conditions (thermal, electrical, etc). So the chip is tested and rated to work at that; they know the variables. With K-series OCing, that is all out the window. The user gets to set all the variables. They can set max wattage draw, how fast it can go regular and turbo, all that shit.

Well, maybe that kind of thing causes problems with these technologies. I can for sure see it with VT-d, since that is offering hardware access to VMs.

I have trouble believing Intel is doing it to be dicks. They like people buying the K series CPUs, more money for them. They only offer the K series at the high end, so it isn't like people buy them instead of buying the higher end chips; they ARE the higher end chips. If anything, I would expect Intel to do the opposite and disable it on lower end stuff to try and get people to spend more.

Comment No, they don't (Score 3, Informative) 339

More cores are useful if, and only if, you have software threaded out enough to use them. Some workloads are, many are not. This "OMG moar cores lol" attitude is silly, and to me reeks of fanboyism. "My chosen holy grail platform does this, therefore everyone should want it!"

Also, more cores aren't necessarily useful if things overall are too much slower. For example, you'd expect a Phenom II X6 1100T to be faster than a 2600 at x264 encoding. I mean, it is all kinds of multi-threaded, and the 1100T has 50% more cores. Maybe the FX-8350 too. While it isn't 6-core, it has 4 modules, so 8 threads.

Well, the reality is that they are not (http://www.anandtech.com/bench/CPU/27). The 1100T and FX-8350 are behind pretty much all modern Intel CPUs. An i5-2400 beats them out. Despite the core advantage, the speed disadvantage per core is too much.
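A hypothetical Amdahl's-law sketch (the parallel fraction and speed figures here are illustrative assumptions, not benchmark data) shows how a per-core speed deficit can erase a core-count advantage:

```python
# Amdahl's law: speedup on n cores = 1 / ((1 - p) + p / n),
# where p is the parallel fraction of the workload. Scaling that
# by relative per-core speed lets us compare hypothetical chips.

def throughput(n_cores, per_core_speed, parallel_fraction):
    """Relative throughput: per-core speed times Amdahl speedup."""
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)
    return per_core_speed * speedup

p = 0.9  # even heavily threaded encoders have serial phases (assumed value)
six_slow = throughput(6, 1.0, p)    # 6 cores at baseline per-core speed
four_fast = throughput(4, 1.4, p)   # 4 cores, each 40% faster (assumed)

print(f"6 slow cores: {six_slow:.2f}x  |  4 fast cores: {four_fast:.2f}x")
# With these numbers, the 4 fast cores come out ahead
```

The serial fraction caps the benefit of extra cores, while faster cores speed up every phase of the job.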

But go ahead and keep telling yourself that you are the only TRUE kind of computer user because you care more about cores than actual performance.

Comment Ya, pretty much (Score 1) 464

You seem to think this is somehow an amusing contradiction, but it isn't. It was my entire point: I have not had to buy a complete new system in like 8 years, yet I still have current hardware. The reason is I keep upgrading pieces. There is almost nothing in it that is original. The case is, but that's it. Everything else has been replaced at least once, most things more than once. However that is doable. That's upgradability. When you can upgrade any component, without needing to upgrade the others.

Comment And you are going to do that on what space? (Score 1) 372

One of the problems with "very large video files" is they are, well, very large. This lil' PCIe SSD isn't (480GB likely). So you'll be needing external storage, since there aren't drive bays, and then you are back to where you started. Also with video files, you need enough speed to stream them in realtime, more doesn't make it magically better. Unless you are doing 4:2:2 uncompressed or something, you don't need that kind of throughput. REDCode is only like 42MB/sec, AVCUltra is 55MB/sec max. A regular SATA 3 SSD is enough to easily stream 6+ of them. At that point, your system will be swamped anyhow with the decoding, you'd probably build proxies for editing.
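As a back-of-envelope check on those numbers, using the bitrates quoted above and an assumed 500MB/s sequential read for a SATA III SSD (my figure, not from any spec sheet):

```python
# How many realtime camera streams a single SATA III SSD can feed.
# Codec bitrates are the figures quoted in the comment; the SSD
# read speed is an assumption, not a measured value.

ssd_read_mb_s = 500      # assumed sequential read for a SATA 3 SSD

codecs = {
    "REDCODE":   42,     # MB/s, per the comment
    "AVC-Ultra": 55,     # MB/s max, per the comment
}

for name, rate_mb_s in codecs.items():
    streams = ssd_read_mb_s // rate_mb_s
    print(f"{name}: {streams} simultaneous realtime streams")
# Both codecs come out at 9 or more streams, comfortably above "6+"
```

Which is the point: decode CPU time, not storage bandwidth, becomes the bottleneck long before a plain SATA SSD does.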

Also it is rather amusing that you bring up video since anyone who has something like an AJA Kona, Blackmagic Decklink, MOTU HDX-SDI, Avid Nitris DX, or the like is straight fucked. No PCIe slots. So you get to rebuy your hardware if you can get it in Thunderbolt (like the HDX-SDI) or you get to go and find something new if you can't (like the Kona).

This is NOT some well-reasoned design to make video pros happy. This is Apple wanting a new toy to wave around and say "Oooo, look how fast this is!" For most uses, useless. If you actually have the need for that kind of speed, you probably also need more capacity than it can deliver. That and PCIe was the chosen interface for most video gear, and either PCIe or FireWire for audio. None of that to be found, so you get to get new gear on top of a new system. Well, isn't that fun.

Comment I've been telling people this for some time (Score 2) 372

Heck, in this thread even. SSDs are all more than fast enough for today's usage on desktops. They aren't the bottleneck. With the lower latency, and good random access, they all seem to work well.

There's a difference between synthetic benchmarks and what you notice on the wall clock, and just because it is faster doesn't mean it is needed. Another area you see it is RAM. DDR3 scales up to 2133MHz by the spec, and you can find stuff up to 3000MHz. The Sandy/Ivy Bridge controllers support RAM speeds async with the CPU bus, so it can scale up. When you drop a synthetic RAM speed test on it, you see the results. The faster RAM scales nearly linearly, as you'd expect. However, then you test actual computation, including synthetic CPU benchmarks, and the difference vanishes. Anything past 1600MHz makes essentially no difference, and even 1333MHz->1600MHz isn't that big. The RAM speed just isn't the limiting factor on the CPU.
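For reference, the theoretical peaks work out like this (standard DDR3 arithmetic: 64-bit channels, assuming a dual-channel setup; these are spec-sheet ceilings, not benchmark results):

```python
# Theoretical peak bandwidth of dual-channel DDR3 at a few speed grades.
# DDR3 speed grades are quoted in megatransfers per second (MT/s),
# and each channel moves 64 bits (8 bytes) per transfer.

def ddr3_bandwidth_gb_s(megatransfers_per_s, channels=2):
    bytes_per_transfer = 8  # one 64-bit channel
    return megatransfers_per_s * 1e6 * bytes_per_transfer * channels / 1e9

for mt_s in (1333, 1600, 2133):
    print(f"DDR3-{mt_s}: {ddr3_bandwidth_gb_s(mt_s):.1f} GB/s peak, dual channel")
# Roughly 21.3, 25.6, and 34.1 GB/s respectively
```

The headline bandwidth climbs nicely, yet the actual-computation results stay flat, because the CPU rarely saturates even the slowest grade.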

That's what people need to understand about any data-access kind of benchmark: there is such a thing as enough. Once whatever processing you are doing isn't limited by it, more doesn't help. Now, as processing speed increases, so can bandwidth requirements, but at a given level, you can hit "enough".

SSDs really are at that point (past it, really) for desktop tasks. You just don't wait on them. They can get data as fast as is needed; if there's any waiting, it is on other things.

So while I don't hate on faster SSDs, I don't care either. I've played with RAIDing them, I've used fast and slow ones, none of it matters in terms of how long it takes for things to happen, or my ability to work in parallel. SSDs are just faster than I require.

Now this is not true in all applications, you can find server setups (NAS, DB, VM, that kind of thing) where indeed an SSD might not be fast enough and you need more than one ganged together, or you need them on a faster interface like PCIe or maybe FC.

Even then, SAS is advancing. HGST has 12G SAS SSDs on the market, and that'll get you 1.2GB/s of throughput, and do so in a hot-pluggable, RAID-able setup with more drives. There are reasons to want to hang drives off of a storage bus rather than right on the system bus.
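That 1.2GB/s figure follows directly from the line rate, assuming the usual 8b/10b line coding at this generation of SAS, where 10 bits on the wire carry 8 bits of data:

```python
# Sanity check of the 1.2 GB/s figure for a 12G SAS link:
# 12 Gbit/s on the wire, with 8b/10b coding spending 10 wire
# bits per 8 data bits.

line_rate_gbit_s = 12.0
coding_efficiency = 8 / 10          # 8b/10b line coding
bits_per_byte = 8

effective_gbyte_s = line_rate_gbit_s * coding_efficiency / bits_per_byte
print(f"12G SAS effective throughput: {effective_gbyte_s:.1f} GB/s per lane")
# 12 * 0.8 / 8 = 1.2 GB/s
```

So a single SAS lane already lands within shouting distance of early PCIe SSDs, while keeping the hot-plug and RAID story intact.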

Comment Ummm, I kinda doubt it (Score 2) 372

While the speed sounds impressive on paper, SSDs are really already going beyond what is needed for storage speeds. You can try this by upgrading from a SATA II to SATA III SSD yourself. I've done that, and I even went from a slow one (WD SiliconEdge Blue) to a fast one (Samsung 840 Pro). Actual difference in system performance? Eh, I doubt I could tell you which was which in a blind test.

The big numbers are mostly dick-waving in a desktop setup. I think the advantages offered by a storage connector and controller are likely to outweigh speed.

Also, please note SAS 12G is coming out soon, and that means SATA at the same speed is soon to come as well.

It just really isn't that big a deal on the desktop. For SANs, databases, other high-performance shit? Sure, there are cases where you need more I/O or IOPS than you can get out of a SAS interface, and then PCIe or the like may be an answer. But for user systems, SSDs are already more than fast enough; additional speed gains don't seem to translate into wall-time gains.

Comment Re:It'll do more for ReactOS (Score 1) 438

To people who are not teenagers, 12 is not old. Some of us drive cars a lot older than that, and cars sit outside in the rain all year round, whereas a PC is in a nice warm dry place.

I occasionally work for a cash and carry depot whose computers are almost all over 11 years old. They probably don't know what the Internet is, I think their POS software is well named (POS meaning something other than point of sale in this case).
