Comment Re:What? (Score 1) 148 148

Exactly.

To me, streaming is a long, hard, complicated way to replace an HDMI cable. Hell, even a £30 cheapo wireless HDMI transmitter works better most of the time.

Controllers are wireless nowadays, and every PC, laptop, tablet, etc. has DVI (and thus HDMI) or direct HDMI connectivity. I very much doubt the intersection of "PC gamer", "runs an ancient VGA-only machine", "doesn't want a PC in the living room but wants to play there" and "has plenty of spare devices" amounts to many people.

Comment Really? (Score 1) 148 148

Microsoft abandoned Games for Windows Live because it could not compete - even bolted onto exclusive AAA games, people hated it, and then developers started REMOVING it from games that already had it.

Let's not even mention that many of the old Microsoft games are on Steam already. They could easily have made those Xbox/Windows exclusives, but then they'd have precisely zero of the profit they see now.

Sorry, but MS is not a threat in the gaming arena. With Desura dead and Origin being what it is, there's only one serious player and a handful of minor ones when it comes to PC gaming. Steam already has all the features necessary to stream to or even run on consoles (it already does, no?), and there are more tiny, cheap devices en route to do just that.

I bet Steam make more in a year than Games For Windows Live ever did.

Comment Re:Disappointed (Score 0) 209 209

God, really? What don't you get about this?

THE SHEER PHYSICS INVOLVED MEANS HUGE POWER CONNECTIONS. What can fit in a Cat5e-thick cable at 240V needs a fucking iron bar to carry the current at 1V. DC requires even thicker iron bars than AC because it arcs like fuck.

You can't shove an iron bar's worth of current through a microcontroller with teeny, tiny traces without things catching fire. Even if the logic is abstracted away (e.g. in a UPS), something somewhere still has to shunt that enormous current around. Look inside a UPS at the size of the capacitors, the width of the traces, the amount of stuff put through physical relays just to isolate the real current-carrying parts from the controller circuitry, and the sheer thickness of copper in the battery leads.

Sheer physics says that what you want - something that can take account of every situation - means designing to the highest spec in order not to start fires. And that means a huge, stonking great, very hot, dangerous device.

Honestly, I can take out a room by shorting a decent-sized car or UPS battery. Fuck knows what happens if you get an arc inside your multi-convertor device if it develops a fault.

There's a reason it doesn't exist, and it's the same reason 10GHz chips don't exist: simple laws of physics.

What that bollocks about nobody daring to suggest building microcontrollers for the things they're used for today was supposed to mean, I have no clue. That's EXACTLY what microcontrollers were built to do, and they've been doing it since their invention.

P.S. Discrete logic generally runs slower than ICs because of one very simple thing - the speed of light. Look up any of the wire-wrap/homebrew "computers" made from discrete components of ordinary size. They are severely limited because the length of their traces (on the order of centimetres rather than micrometres/nanometres) means that signals get out of sync before you get much past 8MHz or so. This is the same reason you don't have 10GHz chips yet, but can have lots of 3GHz multicore chips in the same package.
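To put rough numbers on the trace-length point, here's a back-of-the-envelope sketch in Python (the ~0.6c propagation speed and the 10%-of-a-clock-period timing budget are my own assumptions, not figures from anywhere authoritative):

    # Back-of-the-envelope: how long can a trace be before propagation
    # delay eats a given fraction of one clock period?
    C = 3.0e8           # speed of light in a vacuum, m/s
    PROPAGATION = 0.6   # assume signals travel at ~0.6c in a copper trace
    BUDGET = 0.1        # assume propagation may use 10% of a clock period

    def max_trace_length_cm(clock_hz):
        period_s = 1.0 / clock_hz
        return C * PROPAGATION * period_s * BUDGET * 100

    for f in (8e6, 3e9, 10e9):
        print(f"{f / 1e6:8.0f} MHz -> ~{max_trace_length_cm(f):.2f} cm")
    # prints roughly: 8 MHz -> 225 cm, 3000 MHz -> 0.60 cm, 10000 MHz -> 0.18 cm

The exact figures move around with the budget you pick, but the scaling is the point: going from 8MHz to 3GHz shrinks the allowable distance by a factor of several hundred, which is why those frequencies only work on-die.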

Comment Re:A much more efficient air conditioner, too? (Score 1) 209 209

A/C units capable of cooling a car or boat aren't really comparable. First, they move through outside air while in use, or are submerged, so cooling is helped along quite a lot. Second, they barely have to cool the equivalent of one room of a house - maybe more rapidly, but that's just a case of brute power.

However, the volume covered means they aren't actually cooling at max power for more than a few minutes anyway, and the sealed interior of a car means they can dial down quite quickly. They also don't have to worry about killing the battery at all - they are effectively "plugged in" - and I'm not sure how long they'd last on battery alone; certainly it would be much worse than leaving your lights on!

Applying such things to a household running on solar? I think there are going to be more than a few small problems.

Comment Re:Disappointed (Score 1) 209 209

Compare the size of fixed-voltage adaptors to those able to cope with 110V/240V input.

Compare the size of fixed-output adaptors to "multi-output" adaptors. Some laptop ones are tiny. All the "generic" adaptors are huge.

And that's with a handful of options - maybe 2 on the input and 3-4 on the output. Now combine those. Now add an input capable of taking DC (yes, you can bridge-rectify the AC, but that's got to use diodes big enough for anything you put on it). Now add intermediate paths capable of carrying the MAXIMUM current that can flow, as DC, all the way down to the minimum voltage - if it can output 1V, that's 12 TIMES the current that outputting at 12V would give, needing seriously large cables and intermediate circuitry for any practical purpose. Even 30W at 1V means a 30A rating, and 30A DC at 1V needs 120mm-thick cable for a 1m run.
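To put rough numbers on that current scaling, here's a sketch in Python using P = V x I and the resistivity of copper (~1.7e-8 ohm-metres); the 2.5mm² cross-section and 1m run are illustrative choices of mine, not anyone's spec:

    # Current and cable loss for the same 30W load at different voltages,
    # through 1 metre of ordinary 2.5 mm^2 copper cable.
    RHO_COPPER = 1.7e-8   # resistivity of copper, ohm-metres

    def run(power_w, volts, length_m=1.0, cross_section_mm2=2.5):
        amps = power_w / volts                                     # I = P / V
        ohms = RHO_COPPER * length_m / (cross_section_mm2 * 1e-6)  # R = rho * L / A
        drop = amps * ohms                                         # volts lost in the cable
        heat = amps ** 2 * ohms                                    # watts wasted as heat
        print(f"{power_w:.0f} W @ {volts:5.1f} V: {amps:5.1f} A, "
              f"drop {drop:.3f} V, heat {heat:.2f} W")

    for v in (240.0, 12.0, 1.0):
        run(30, v)
    # 30 W @ 240.0 V:   0.1 A, drop 0.001 V, heat 0.00 W
    # 30 W @  12.0 V:   2.5 A, drop 0.017 V, heat 0.04 W
    # 30 W @   1.0 V:  30.0 A, drop 0.204 V, heat 6.12 W

At 1V, a fifth of your 30W goes up as heat in a single metre of perfectly normal cable, which is exactly why the conductor cross-section - and everything else the current passes through - has to balloon as the voltage drops.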

Even allowing only "reasonable" voltages, you're also expecting both step-down and step-up conversion (e.g. mains to 12V, and 12V to 48V), which is rare to see in the same device.

Basically, the thing would be as huge and heavy as a car battery, get fucking hot, be prone to failure, and require internal circuitry and isolation from the power paths that just aren't practical. And it would probably be quite inefficient across whole ranges of voltages.

You can't have some tiny 5V chip controlling a variable output of that size without some huge, specialist, "break open a UPS and see the kind of size we're talking about" components in it, with thick paths between them.

Honestly, the first thought that comes to mind is "fire", quickly followed by "fucking expensive". There's a reason you set a standard, follow it and try not to change it (hence half the world refusing to change to the other half's mains voltage): the conversion equipment is then fixed, which is what makes it practical. And there's a reason 100+ volts emerges as the winner every time - you can get decent power down a decent-sized cable.

Hell, even 12V DC / 240V AC input devices are "rare" in modern life compared to everything else - motorhome and marine use, pretty much - and they're expensive and sometimes several years behind the technology. And internally all they do is convert to one or the other, so it's really just a TV and a converter in a box wired to the same 12V DC input on the actual TV (which almost certainly then boosts the voltage for its LCD panel), because that's easier to make than some generic device that can take in anything and shove it wherever it wants in whatever format it wants.

Fuck, even inverters or voltage convertors (110/240V) as standalone devices for generic use are bricks that weigh a ton, cost a lot, and have very limited power output.

Comment Meanwhile... (Score 1) 305 305

This UK citizen would rather under-18s didn't do shit and/or post about it online in ways that might later affect their lives.

Take some fucking responsibility for yourself from about the age of 10/11, as the law states, and if you cock up, learn to live with the consequences.

Sure, we'd all like a time machine that could erase certain mistakes, but why the hell should we legislate some cyber-time-machine that actually removes indiscretions posted publicly?

Not only that, it just won't work. You can't just erase facts forever. And if you could, the infrastructure capable of doing that is more open to misuse by adults than anything else I can think of. I bet there are certain UK politicians at the moment who would like to conveniently forget paedophiles in the Houses of Parliament, not to mention the last drug-taking, prostitute-hiring British peer in charge of parliamentary ethics.

Comment Re:Eventuality? (Score 5, Insightful) 549 549

Hint:

An article about yet another buyout / possible closedown of the site gets 150-ish comments, most of them going on about how badly DHI have treated us.

I'm pretty low-numbered nowadays, yet I used to be the "newbie" on here.

The Reg gets more comments per article and has a lot more articles. Even SoylentNews gets not much less than Slashdot does, and that's basically a startup Slash clone.

The site is not what it was, and it would be quite a trick to bring it back now.

Comment Sell it cheap. (Score 1) 549 549

AKA "Fuck, we can't make money by just buying an established site and then trying to shove our shit into it without consideration of our userbase".

To be honest, I'm already elsewhere - even paying to go elsewhere in some cases. Slashdot is just nostalgia nowadays. Partly because of the various DHI "user monetisation" cockups.

Comment Re:Most people won't care (Score 1) 104 104

The size and complexity of modern CPUs mean that you don't stand a chance of getting a no-backdoor assurance for anything useful.

Down at the scale of small microprocessors and ICs it's possible, but incredibly difficult. If you were serious about no backdoors (e.g. military), you wouldn't be using an off-the-shelf American product. You'd be describing the chip you want built and validating the end product against your design. Because that's the ONLY way to be sure.

There is absolutely no guarantee that your phone, the box in your phone cabinet, your network switch, your router, your PC, even your TV does not have these sorts of backdoors (especially if you consider dormant-until-activated backdoors on devices). We've reached a level of complexity where it would take far too long to validate any one design.

As such, if you want that sort of assurance you have little choice: antiquated, tiny, powerless chips at best. You might be able to validate a Z80, but you wouldn't get anywhere near, say, a decent ARM chip at a few hundred MHz - even though the designs can be licensed (the NDAs associated with licensing such things would probably stop you talking about any backdoor legally anyway...).

If you want a modern PC, you literally have no choice. The chipset on your motherboard is so complex as to be unauditable by an end user, or even a skilled professional. We can just about decap and understand some '80s arcade game chips, and then only if they are simple and of certain types. Some of the protection/security chips from that era are still complete unknowns.

You can care all you like. What you can't do is even knock up a Raspberry Pi competitor without spending inordinate amounts of money and using proprietary components that you can't inspect somewhere along the route.

Comment Re:Wait, what? (Score 1) 57 57

You're missing the point (though the practical implications are the same).

The check on whether the code was valid was only run if the user typed a code into the box. Typing in random letters wouldn't validate. Typing in a valid code would.

The oversight was that the checks existed but never actually ran in the case of null, not that they were incapable of validating codes.

As such, rather than just "Let's make up random codes and then ignore them and validate anything", the thought process was "Let's generate codes, validate them properly, but oh shit, we forgot to validate this path".

Although the results are the same, the implication that they never intended to check against the code is wrong. As such, it appears to be a coding oversight which allows an authentication bypass, rather than deliberate laziness masquerading as security.
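In other words - a minimal sketch in Python with made-up names, not the actual code in question - the difference is between these two shapes of bug:

    # Hypothetical illustration only; function and variable names are invented.

    def deliberate_laziness(submitted_code):
        # "Generate codes, then never check them": anything at all gets in.
        return True

    def coding_oversight(submitted_code, valid_codes):
        # Validation exists and works... but only runs when a code was supplied.
        if submitted_code is not None:
            return submitted_code in valid_codes   # random letters are rejected
        return True   # the forgotten path: sending no code at all slips through

The observable result is identical - send nothing and you're in - but the second shape is a missed branch, not a checker that was never meant to check anything.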

Comment Updates (Score 2) 316 316

Automatic updates are fine in principle.

But every update breaks 1% of the things it hits. It's as simple as that.

For home users, that isn't a problem, because they have one machine and so might survive hundreds of updates before anything goes wrong.

On networks, it's a damn nightmare. Even with homogeneous environments, you're looking at one thing broken every update, or thereabouts.
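A rough sketch of why the same 1% figure feels so different at home versus on a network (taking the 1% above at face value and assuming each machine breaks independently - both are my simplifications):

    # Chance that at least one machine breaks, assuming each update
    # independently breaks any given machine with probability 0.01.
    P_BREAK = 0.01

    def chance_of_any_breakage(machines, updates):
        return 1 - (1 - P_BREAK) ** (machines * updates)

    print(chance_of_any_breakage(machines=1, updates=100))   # one home PC, 100 updates: ~0.63
    print(chance_of_any_breakage(machines=200, updates=1))   # 200-seat network, 1 update: ~0.87

One machine sees roughly one problem per hundred updates; a couple of hundred seats means something is broken on almost every Patch Tuesday.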

The problem with forcing auto-updates is that it doesn't solve the reasons people turn auto-updates off. The main reason? People have suffered breakage like this on previously perfectly working systems - to the point of BSODs or complete failures to boot, not just "oh, something's slightly slower or they moved an icon around".

To a professional environment, it's a 10-minute re-image. To a home user, it's days without the machine while they pay someone to look at it, who does two seconds' work and charges a fortune, for something they aren't likely to understand (and if they tried it themselves, they might well end up breaking more than they fix).

It's the wrong way round.

I get that you want to keep things secure, but breaking graphics drivers for EVERYONE isn't the solution there. In fact, more of a risk is some virus getting on the machine and crippling auto-update anyway. I see that as the only way for a virus to survive any length of time - if it allows random patching, then its own entry method will get patched out from under it.

So auto-patching by default doesn't solve that problem - malware will still stop updates from happening, and the security risk persists. But users who follow all the guidelines are getting BSODs and crashes and unbootable computers because of the quality of the updates, not to mention the junk shoved into them (malware scanners, adverts for the next version of Windows, etc.). That's just backwards.

The one thing that annoys me about any software is lack of choice. Why CAN'T I have the old start menu back if I want? It's really not that difficult to supply it as an option. I will go out of my way to reintroduce those options if necessary. I don't care what you want as the default, I care about being able to select MY CHOICE.

And that's what they are planning with Windows 10 updates - removing the choice such that you can't stop a known-bad update propagating to your machine unless you spend lots more money on enterprise-level versions of the OS and dedicate a server to the task. Given the number of bad updates pushed out in just the last year, it's a disaster waiting to happen.

I can, and will, find the option to disable it, just because you MADE me do so. If you'd left updates on by default (as they've always been) but allowed me to disable them, I could at least say "Woah, there's a dodgy update for Windows 10 making the news - I'll stop it until I'm sure MS has fixed the problem". The alternative is really VM'ing it and rolling back - and if I'm going to have to do that, fuck Windows, basically.

It's a nice sentiment, but MS has proved that it can't be trusted not to put tons of junk into "critical security updates" that it doesn't label properly (and shoves adverts for Windows 10, which you then struggle to rid yourself of, into such updates). As such, I can't leave them to decide what's critical for security and should be forced onto my machine, and what's not.

And if an nVidia driver - whether or not it can be fixed by a clean install - might just one day get forcibly updated and cock up a machine, that's not something I want on a games machine which has only the barest of connections to the net behind a firewall. It really doesn't need all the latest Windows Updates if all it is is a games machine with, say, Steam, that doesn't download third-party shit, just plays games, and goes out on a handful of high-numbered gaming ports. Especially if the risk is some random nVidia driver being shoved onto the machine and breaking it (hell, some nVidia drivers will ramp up the temperatures etc. on your hardware because they mis-support it!). I could almost VLAN the damn thing off my network entirely if necessary.

Sorry, MS, but I can only hope this is one of your stupid "Let's announce something ridiculous and then recall it the day before release so we can say we 'listened to customer concerns'" announcements.

I'm going to be getting a lot of extra work from friends and family if this goes ahead and, to be honest, I just don't want that.

Comment Re:Okay but using a typical browser for download (Score 1) 118 118

Your use case is very last decade.

Nobody downloads single files over HTTP for anything serious. Half my users don't even understand what a ZIP is, and those that vaguely do think of it only as a folder.

Streaming, multiple cloud servers, torrents, etc.

Gigabit isn't about having a 1-second download. It's about having multiple downloads running simultaneously at the speed that only one download can enjoy now. Hell, even web browsers download multiple things from a website in parallel nowadays.

Comment Re:So you have it.... Now what??? (Score 1) 118 118

60Mbps = 7.5MBps.

I'm not at all sure I'll base my future spending decisions on someone who doesn't get this.

Gigabit has tons of uses, and don't equate "ISPs" with "consumer-only ISPs". Businesses will happily pay for gigabit speeds, therefore small businesses will too, therefore work-at-home people like graphic designers and similar will too.

It's not a question of whether the hardware can take it (the ISPs can always supply compatible hardware, because nobody knows what the fuck ADSL2 vectoring or DOCSIS 3 is, so the ISP has to supply sufficient kit anyway). It's a question of: is the value there? Are there limits? Is it available? What's the install cost? How much are you paying per Mbps? And so on.

60Mbps is fine, but lots of people need more. It would take you two days to download my Steam folder alone. Put several kids in the house, a mother who works from home, a father who mirrors the family photos to his brother's house, etc., and you're fucked.

There's not really an upper limit on what Internet speed I would like. Maybe one endpoint can't flood my connection (I wouldn't want it to), but nowadays there's a lot more than one device online, one user online, or one destination in play.

Fuck, even on 60Mbps (which IS more than I currently have at home, but only because of cost) it would take me all day to sync my Google Drive to a new device, for instance.
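The arithmetic behind those numbers is straightforward (a sketch in Python; the ~1.2TB Steam library and 200GB Drive sizes are illustrative guesses on my part, not figures quoted above):

    # How long a given download takes at a given line speed.
    def hours_to_download(gigabytes, mbps):
        megabytes_per_second = mbps / 8            # 60 Mbps ~= 7.5 MB/s
        return gigabytes * 1024 / megabytes_per_second / 3600

    print(hours_to_download(1200, 60))    # ~45 hours - roughly the "two days" above
    print(hours_to_download(1200, 1000))  # ~2.7 hours for the same library on gigabit
    print(hours_to_download(200, 60))     # ~7.6 hours - most of a day to sync 200GB of Drive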

Comment Nope. (Score 4, Insightful) 114 114

Tomb isn't a successor to TrueCrypt, for me at least. Not even close.

TrueCrypt's selling point is NOT an encrypted container. We can do that any number of ways, not least just encrypted loopback, but all of them leak the same amount of information.

TrueCrypt's selling point was full-disk encryption and a bootloader that hooks BIOS interrupts to allow live, in-memory, OS-agnostic transparent decryption. That's not something you can do with a shell script.

Anything that isn't full-disk encryption is worthless if the machine is stolen - it probably takes minutes to find the key in the swap files and unlock the containers if they've been used recently. The plaintext is probably still lurking around on disk as temporary files, etc.

The only reason I used TrueCrypt was that you could full-disk encrypt and nobody could get in without modifying the hardware of the machine and then getting me to enter my passphrase - not something a thief was going to be able to do. It meant the setup was Data Protection compliant, that you could afford to lose the entire machine and not worry, and that it didn't matter what you did with the machine underneath - what OS, what partitioning, etc., even fake partitions with false copies of Windows in them.

Sorry, but your slashvertisement is exactly what it says - a shell script around some basic command line utilities. It's nowhere close to a TrueCrypt replacement unless your use-case is extremely trivial and - actually - not that secure at all.

As it is, I don't think there's currently a product I can trust with complete boot-time control, except for TrueCrypt and its directly compatible replacements. I will look at various projects as they evolve but, for me, the winner will be whoever gets a UEFI bootloader first.
