Seriously - try living elsewhere.
Meanwhile, in civilised countries, that's an illegal working environment.
Er... I don't think 300 dwellings is anywhere near real capacity.
In the UK, cable is delivered with DOCSIS (Actually EuroDOCSIS, same thing, slightly different frequencies), and it's by street, and our streets are much smaller than the typical US "block".
It might be 10Gb over, say, 30 dwellings, or one apartment block. But the bottleneck will ALWAYS be the uplink anyway. What would you need to put 10Gb from multiple clients back to the net? Are you honestly expecting some 1Pb connection at Comcast somewhere? Highly doubtful. What saves you is caching, proxying, and the fact that people consume in small bursts or little dribbles whatever they are doing (gaming, web browsing, emailing, downloading, etc.). That's why P2P is such a pain - nothing to do with the legality, entirely to do with the fact that you can max out the uplink connections with just a handful of users.
But that's the same wherever you are. Even on, say, a workplace or school network, your uplink probably isn't on a par with your between-server connections, and is at best only an order of magnitude better than your client connections (e.g. 100Mbps to hundreds of clients, 1Gb actual upstream, or 1Gb/10Gb, etc.).
But, still, a 10Gbit connection will download files and browse the web up to 10 times faster than a 1Gbit one. You just won't be able to max it out 24/7, that's all. Nothing's changed in that respect in decades.
Hint: When you upgrade your home network from Gigabit to 10Gb, you will need to multiply everything above it by 10 too or you'll get worse performance than before. Please tell me where you're going to buy 1Tbps kit from (even as an ISP) that isn't so prohibitively expensive that you can only afford to do it on major links and not every 100 clients.
We've just about got 1Gbps as "mainstream". 10Gbps is still expensive but is commercially available to all. 100Gbps is a pipe-dream unless you're a datacentre or ISP or huge enterprise.
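The oversubscription arithmetic above is easy to sketch. (The figures below are illustrative assumptions, not any real ISP's numbers.)

```python
# Illustrative contention-ratio arithmetic (assumed figures, not any real ISP's).
def contention_ratio(clients: int, client_speed_gbps: float, uplink_gbps: float) -> float:
    """How many times oversubscribed the shared uplink is."""
    return (clients * client_speed_gbps) / uplink_gbps

# 300 dwellings at 10Gb each sharing an assumed 100Gb uplink: 30x oversubscribed.
print(contention_ratio(300, 10, 100))    # 30.0

# To keep the same 30:1 contention after a 10x client upgrade, the uplink
# must also grow 10x - hence "multiply everything above it by 10 too".
print(contention_ratio(300, 100, 1000))  # 30.0
```

Bursty usage is what makes 30:1 (or far worse) tolerable; sustained P2P traffic from a few clients is what breaks it.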
What on earth makes you think that any algorithm, proof or technique can account for hardware failure of any kind? That's what RAID, etc. are for and are still far from a guarantee.
Plus, kind of the point of a checksum is to ensure the integrity (to a certain probability) of data. If either the checksum or data change, they will no longer match up - short of a billions-to-one random chance that you can't do anything about anyway. Incorporate the flag into the data that you checksum and that's covered.
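The "incorporate the flag into the data that you checksum" point can be shown in a few lines. (SHA-256 is just one reasonable choice of checksum here; the record layout is my own illustration.)

```python
import hashlib

# Checksum the flag byte together with the data, so corrupting either one
# breaks the match - up to the usual astronomically-small collision odds.
def record_checksum(flag: int, data: bytes) -> bytes:
    return hashlib.sha256(bytes([flag]) + data).digest()

original = record_checksum(0, b"some payload")
# Flip the flag: the checksum no longer matches.
assert record_checksum(1, b"some payload") != original
# Corrupt the data: the checksum no longer matches either.
assert record_checksum(0, b"some paylo4d") != original
```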
You cannot mathematically prove that any single bit will ever be written to disk whatsoever. All you can do is prove that you won't lose bits that you've confirmed as written **unless there is hardware failure**. That's it. Hell, we can't even prove that we're talking to a real device rather than an emulated one that just discards random bits.
Write zero to a flag.
Write data to temporary area.
Calculate checksum and keep with temporary area.
When write is complete, signal application.
Copy data from the temporary area to its permanent location when convenient.
Check that the permanent copy's checksum matches the temporary one.
Mark flag when finished.
If you crash before you write the zero, you don't have anything to write anyway.
If you crash mid-write, you've not signalled the application that you've done anything anyway. And you can checksum to see if you crashed JUST BEFORE the end, or half-way through.
If you crash mid-copy, your next restart should spot the temporary area being full with a zero-flag (meaning you haven't properly written it yet). Resume from the copy stage. Checksum will double-check this for you.
If you crash post-copy, pre-flagging, you end up doing the copy twice, big deal.
If you crash post-flagging, your filesystem is consistent.
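The steps above can be sketched as an in-memory toy - not a real filesystem, and the names and layout are mine:

```python
import hashlib

# Toy model of the write-then-copy protocol described above.
# "temp" plays the journal/temporary area, "perm" the permanent location.
class ToyJournal:
    def __init__(self):
        self.flag = None      # None = nothing staged, 0 = staged, 1 = committed
        self.temp = None
        self.temp_sum = None
        self.perm = None

    def write(self, data: bytes):
        self.flag = 0                                   # 1. write zero to a flag
        self.temp = data                                # 2. write data to temporary area
        self.temp_sum = hashlib.sha256(data).digest()   # 3. checksum, kept with it
        return "signalled"                              # 4. signal the application

    def copy_when_convenient(self):
        self.perm = self.temp                           # 5. copy to permanent area
        perm_sum = hashlib.sha256(self.perm).digest()
        assert perm_sum == self.temp_sum                # 6. checksums must match
        self.flag = 1                                   # 7. mark flag when finished

    def recover(self):
        # After a crash: a full temp area with a zero flag means the copy
        # never provably completed - resume from the copy stage. Re-copying
        # after a post-copy, pre-flag crash is harmless, just done twice.
        if self.flag == 0 and self.temp is not None:
            self.copy_when_convenient()

j = ToyJournal()
j.write(b"hello")
j.recover()            # simulate crashing between write and copy
assert j.perm == b"hello" and j.flag == 1
```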
I'm sure that things like error-handling are much more complex (what if you have space for the initial copy but not the full copy? What if the device goes read-only mid-way through?) but in terms of consistency is it really all that hard?
The problem is that somewhere, somehow, applications are waiting for you to confirm the write, and you can either delay (which slows everything down), or lie (which breaks consistency). Past that, it doesn't really matter. And if you get cut-off before you can confirm the write, data will be lost EVEN ON A PERFECT FILESYSTEM. You might be filesystem-consistent, but it won't reflect everything that was written.
Journalling doesn't need to be mathematically-proven, just logically thought through. But fast journalling filesystems are damn hard, as these guys have found out.
a) Why would you open a voicemail in a web browser? That's a stupendous security risk. And it would be an audio player, surely, not a browser?
b) What is your carrier doing to deliver voicemail by anything other than their voicemail service?
c) I share your pain somewhat here but: Put your phone on speakerphone when doing voice prompts. It's so much easier and you can ensure the screen doesn't go off. P.S. you have options to delay the screen turning off. Use them if it annoys you.
d) Two web browsers? Choice. You might want to just use Chrome, others might want something else. P.S. Android's "Internet" option is Chrome, just an old version. They don't brand it because they don't want to shove it down your throat but this way everyone has a browser and can STILL choose their own (like, say, Chrome, or Opera Mini, or anything else at all). Compare and contrast to Safari on iPad, etc.
e) Satnav - choice. They haven't said "YOU WILL USE THIS APP", they've given you apps, the carrier has given you apps, you can give yourself apps and choose what you want. Don't moan about choice. P.S. I use Copilot on all my Android devices.
f) Get a better phone if it overheats. Heat that would break a smartphone would break anything with an LCD screen, or even old-school tech. Phones dial back the speed under heat rather than break. If yours is actually breaking, either the phone is shit or nothing would have survived that heat anyway - and the throttling is what's saving you from having to buy a new phone.
Note: I hated smartphones for years and literally never used one until two or three years ago. Bought one Android Samsung, never looked back, stopped my old TomTom subscription/device and moved everything to the one place where I can choose to do everything or nothing. Hell, I can manage my workplace network from it. I'm far from a cutting-edge "YOU MUST USE THIS" kinda guy, but that choice seems to be exactly what you're moaning about the lack of. This ain't Apple. You can use / configure what you like how you like.
Someone get me a dictionary of acronyms.
If you're exposing any ports to the Internet that are not absolutely necessary for the general unknown public to communicate with you, you're an idiot.
Web ports? Yes, if necessary.
Email ports? Yes, if necessary.
VPN ports? Yes, if necessary.
Anything else just SHOULDN'T be. And certainly never anything along the lines of RPC, CIFS, etc.
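That's just default-deny with a tiny whitelist. A sketch (the port list is my assumption of "necessary" - yours should match whatever services you actually run):

```python
# Default-deny: only ports you've explicitly justified get exposed.
# The whitelist below is illustrative - web, mail submission, OpenVPN.
ALLOWED_PORTS = {80, 443, 25, 587, 1194}

def should_expose(port: int) -> bool:
    return port in ALLOWED_PORTS

assert should_expose(443)
assert not should_expose(135)  # RPC - never
assert not should_expose(445)  # CIFS/SMB - never
```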
Almost everything you can summarise in a line is bollocks news headlines. Science is, unfortunately, a lot more complicated than that.
We (probably) use all our brain. Just not all on conscious intellectual thought. It's not hard to see that - cut into the brain and you ALWAYS lose something, it just might not be immediately obvious what.
The appendix may well be a store of gut bacteria that reseeds the gut in the case of illness. Which kinda makes sense, the same way you save some of the cheese by-products to help make the next cheese. And also explains why when it blows it's quite so serious - it's basically an inactive mini-gut getting infected and exploding.
It's just that it's hard to prove these things definitively because they were never DESIGNED to do that. They just happen to do so. And so they may be doing ten jobs well or one job badly or no jobs at all and it's incredibly difficult to say which for a global population at any static point in time.
Similarly "junk" DNA is as it says - noncoding. We think. But it might be doing other stuff. Hell, it may just be purely structural, or it may be remnants of old coding, or it may just have got mixed in the same way you accidentally mix in insects into basically every foodstuff you eat (yes, literally) but because it "just works" and nobody notices, it doesn't really matter.
Or, maybe, its coding is not as simple as we expect. Nobody's ever really SEEN things like DNA do their jobs. You can look at it, you can simulate it, but nobody really knows exactly what's going on in the millions of full strands inside a HUMONGOUS cell that replicates billions of times over in the space of a matter of months.
The problem is that science is so complicated that you can't understand it, and headlines are all you pick up. How many moons does the Earth have? Depending on which scientists you ask, and which definition of "moon" you use, it can be zero, one, two, twenty-seven or hundreds. Nothing is as simple as you can explain in one sentence. Or even one article. Or even one research study and paper. Or even one field of expertise.
I think you mean things like MemoryDoubler and stuff that was around at that time.
DriveSpace was purely disk-based. There were products that compressed pages in-RAM, and have been since the DOS days, and still are, and are present on every OS if you look hard enough.
Downloading roaming profiles far outweighs any user-switching or bootup time.
We're on Windows 8. We don't have power-saving (it drives teachers mad when their screens / drives spin down just as they move from class discussion back to the PC); we just log off, and then shut down at the end of the day.
But the users change EVERY HOUR on the hour, on every PC. So there's not much to linger. However, I guarantee you that roaming profile download takes more time than anything else the machine might do in the course of a day.
I work in schools (in the UK, that means the standard, mandatory education up to 18, nothing beyond that). Most places I have spoken to are wary of 64-bit, even, so they're still technically running on, what? 3.5Gb or thereabouts?
I have 64-bit throughout so I have 4Gb, but I've seen little reason to go past that. Pretty much the bottleneck is network, and if I get the network up to speed (not cheap), it would be server-side (disk array speed, etc.). The clients very rarely do anything that they aren't waiting for stuff from the network to complete.
Next year, I may go 8Gb in the clients, but I'd predict much bigger speed increases from just going SSD on the client. (Lifespan under swap conditions? Meh - drives barely last a year or two for us anyway and then we're replacing the whole machine. Overprovision, let it loose, and suffer a tiny client hard drive for the sake of speed.)
I really need cheap 10Gb kit, though - from server down to end-switch. Gigabit to the desktop is okay for now, but it won't be long. But RAM? Hell, 4Gb is fine for basically any business task unless it's a server. There, yes, fuck, you need as much as you can get. I just doubled all my servers' RAM this summer, at great expense. But the clients are running Windows, Office, a few apps and a browser, and rarely make it through the day without being logged off or shut down. And we do deal with large databases and centrally-stored stuff all the time, but that's for the server to worry about. The clients, however, need next to nothing.
Before we start on the conspiracy theories - ANYONE who relies on a third party to encrypt their stuff is not worried about security. Not really.
And any cloud provider will accept and store encrypted files that ONLY YOU have the key to.
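The workflow is: encrypt locally, upload only ciphertext, keep the key yourself. A toy sketch of that shape - the SHA-256-counter keystream below is a stand-in for illustration only, NOT real security; actual use wants a proper authenticated cipher (e.g. AES-GCM) from a vetted library:

```python
import hashlib

# TOY stream cipher for illustration only - NOT for real-world security.
# It shows the shape of client-side encryption: the provider only ever
# stores ciphertext, and only the key holder can get the plaintext back.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = b"only-you-have-this"
plaintext = b"tax returns"
ciphertext = keystream_xor(key, plaintext)          # this is what you upload
assert ciphertext != plaintext
assert keystream_xor(key, ciphertext) == plaintext  # only the key holder decrypts
```

XOR with the same keystream is its own inverse, which is why one function both encrypts and decrypts here.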
It depends on what you're looking at.
Is this entirely independent research by respected labs? Or is this research in one particular area, by a specialised lab, with a particular sponsor? The latter is, we all know but can't prove easily, biased.
Is this a paper designed to do nothing more than back up a sponsor's advertising claims? Or is it something groundbreaking from a lab with few political ties, nothing to prove, and some serious science behind it?
It all changes the perspective of what a "scientist" is (i.e. someone who works not-for-profit to forward the cause of humanity, versus some guy in a lab coat with a PhD who's sold out?). As difficult as it is for us to distinguish, imagine what that means to the general public. Those people who invent terms for shampoo commercials that have fuck all to do with making your hair shiny, even if they sound like it, are held in the same regard as the guy doing genetic research to hunt down some elusive connection to an ultra-rare but devastating condition.
The problem is that there is nothing to distinguish the two, and both are technically "science", and thus can generate papers and be done by postdocs.
Sorry, but this is why I believe that referees on papers should be chosen specifically, why all connections should be documented IN THE PAPER (didn't declare who sponsors your lab? Bam, you're in the shit pile and your paper is forever disregarded), and why anything published should not be accepted until it's been confirmed - ideally by a bitter rival.
Too much shit is published nowadays as "science", commercial crap, ridiculous notions, unreviewed papers, basically anything from arxiv.org, etc. And the requirements aren't strict enough.
My girlfriend's a PhD in a medical field. Her contributions to a book were copied basically verbatim by someone else and published as another chapter in the very same book. It discredits her work, and that of the plagiarist, not to mention the primary author/editor. Things like asking people to reproduce their work under independent scrutiny are the only way to verify what's true, who did it, and how it was done so it can be repeated by future generations. We've lost all that in the last 20 or so years, even in things like Maths, etc. by having people referee their own friends' papers and other crap like that.
96 million balls at $0.36 each?
I reckon you can get a very big tarp, and supports, and structure, for that.
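The arithmetic, worked in integer cents to dodge float rounding:

```python
# 96 million balls at $0.36 each, in cents so the multiplication stays exact.
balls = 96_000_000
price_cents = 36
total_dollars = balls * price_cents / 100
print(total_dollars)  # 34560000.0 - about $34.5 million for the tarp budget
```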
Work is the crab grass in the lawn of life. -- Schulz