Comment Re:I wish them good luck. (Score 4, Informative) 647
Slight correction:
NetBSD and FreeBSD were developed independently in the 90s, and mostly in parallel.
OpenBSD forked off NetBSD.
DragonflyBSD forked off FreeBSD.
OpenBSD is hoping to do just that.
Exactly. You need to separate infrastructure (installation and maintenance thereof) from content (that goes across said infrastructure).
There's only 1 set of power lines going into each building, yet there are multiple providers/sources of electrons in the grid. Most power utilities are split into generation and delivery businesses. Customers can even choose where their payments go (to "normal" power generation, "green" power generation, etc).
There's only 1 set of gas lines going into each building, yet there are multiple providers/sources of natural gas in the grid. Most gas utilities are split into supply and delivery businesses. Up here in BC, Canada, we have multiple gas suppliers to choose from, depending on whether we want a fluctuating rate or a locked-in rate. All goes over the same gas line infrastructure.
Back in the day, there was only 1 set of phone lines into each building, but you could get phone services from multiple local or long-distance carriers. You could even get Internet service through multiple dial-up ISPs or ADSL ISPs. Separate businesses from infrastructure and content.
Some cities even had 1 set of cable lines going into a building, but you could get TV service from multiple cable companies.
We need to separate IP infrastructure (1 set of fibre cables into each building) from IP services (multiple ISPs, IPTV, IP-whatever companies).
It's absolutely ridiculously redundant and wasteful to have every ISP running its own lines into each building (cable, ADSL, fibre ISP1, fibre ISP2, etc). And it's anti-competitive as all get out to have a content company managing Internet infrastructure.
What are you dropping it on and from how high?
My G2 has been without a cover for 10 months now. Dropped several times on laminate floor, carpet, linoleum, and carport cement (2-4' drops). Been kicked across the floor and the driveway. And went tumbling down the stairs twice so far. Only permanent damage is two small divots on the very edge of the bezels (in the extremely thin silver band just outside of the glass screen).
This has been one of the sturdiest phones I've had, and the first touchscreen phone I've kept out of a case. The only one stronger was my first cell phone, a fortified Panasonic TX-220.
Except that Google Voice only works for Americans. If you don't live in the US, it's extremely difficult to get a GV number, or to use GV.
Skype works on any Windows/MacOS computer, virtually any iOS, Android, or MS phone device, some consoles, and probably other devices. Even if you can't phone a landline using it, you can still connect with people using it.
A better comparison would be Google Hangouts which can be used to:
- send/receive SMS messages on cell phones
- send/receive instant messages on cell phones, tablets, chromebooks, laptops, PCs, etc
- make voice or video calls between Hangouts users
- make voice calls to landlines within North America for free (and other countries for pennies)
Google Hangouts is quickly overtaking Microsoft Skype in features, although it's still building its userbase.
Standby time is irrelevant. Turn the screen off, put it in airplane mode, and it will last a month. Turn the phone off completely, it will last for years. Never take it out of the box, and it will last indefinitely.
How long the battery lasts when you don't use it isn't really a metric worth debating.
How long a battery lasts while you use the phone on a regular basis is what matters. And 4 or 5 hours of SoT isn't anything to brag about. Not when the LG G2, Droid Razr Maxx, and similar phones get 7+ hours of SoT.
4-5 hours in 2014 is pathetic.
4-5 hours screen-on time isn't that great. That was the benchmark in 2013 before the LG G2 came out. It's normal to get 7 hours SoT with the G2, and more if you tweak things or use it as an ereader.
The number of days of standby time is irrelevant. My phone lasts 3+ days if all I do is check the odd text message; it's amazing how long you can drag out 8 hrs of screen time if you don't actually turn the screen on.
Runs what faster than Chrome? JavaScript? Nope. HTML rendering? Nope. Loading web pages in general? Nope. Starting up from disk? Nope.
Firefox used to be the lean and mean alternative browser. Then Chrome came along and showed everyone just how slow and bloated Firefox had become (which just shows how slow IE is).
The only thing Firefox has left as positive features are extensions and plugins. In every other way, it's been surpassed by Chrome, Safari, and sometimes even IE11+.
You gotta love marketdroids. "Max" means there's nothing greater, yet they have "Max Plus" and "Max Turbo".
Where do they find the people who dream up these names? And why do they still have a job? Did the Street Fighter naming crew get picked up on contract here?
"I want the max download speed you have."
"Okay, would you like Max, Max Plus, or Max Turbo?"
"Uhm, what?"
Until this past school year, we had 20-odd elementary schools running off 4 Mbps / 0.77 Mbps ADSL links.
We even had a handful of sites running off 2 Mbps point-to-point wireless. And one site running on an E1/T1 (1.5 Mbps).
And all of them chomping at the bit to get iPads, Chromebooks, and Android tablets into the school.
We gave up waiting for the province (which manages school Internet connections) to upgrade their connections (there's about a 3-year wait list). Especially once we learnt their "next generation Internet" recommends E10 (10 Mbps) for an elementary school and E100 (100 Mbps) for a secondary school.
Many years ago, we were part of a city-wide initiative (that fizzled out after 2 years) to run fibre to all admin sites, secondary schools, and city buildings, so we have gigabit fibre links between our school board office and the in-town secondary schools.
This past year we've been putting up Ubiquiti point-to-point wireless links between elementary schools and secondary schools. This has upgraded their connections to 100 Mbps (with 60 Mbps actual). Still part of the provincial network, and it's freed up a lot of money for us to be able to upgrade the few out-of-town sites and sites without line-of-sight to another school.
4 Mbps is not "broadband" by any definition. And 10 Mbps is barely "broadband" for a single-family household. 25 Mbps needs to be the minimum definition for a family dwelling, and 100 Mbps should be the minimum for any kind of school or multi-person building.
It's only expensive because you are doing it wrong.
Instead of having multiple different providers all running their own copper, cable, and fibre into each building, duplicating the work, it really should be handled like a proper utility.
There aren't 6 different power cables running into your dwelling, even if there are multiple power providers in the county/state/country. There's 1 cable that multiple providers use.
There aren't 6 different water lines running into your dwelling, even if there are multiple water providers in the county/state/country. There's 1 set of pipes that multiple providers use.
There aren't 6 different gas lines running into your dwelling, even if there are multiple gas providers in the county/state/country. There's a single gas line that multiple providers use.
Thus, there really shouldn't be multiple copper, cable, and fibre lines into your dwelling. There should be only a single set of fibre that goes into the building that multiple providers can use to send bits to/from your house. Terminate them all in multiple central locations in the city, and let the different Internet, video, TV, phone, whatever-over-IP providers rent space in them to stick their equipment in, and just run patch cables and vlans between them as needed.
It's time to treat IP connectivity as the utility it has become, and to centralise the infrastructure for it. There's no need for each individual ISP/content provider to run their own infrastructure around the country. Stop duplicating the infrastructure. Run fibre once and be done with it.
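The "one fibre, many providers" model described above is essentially how open-access networks already deliver service: each provider rides its own VLAN tag over the single drop into the building. A minimal sketch, assuming a Debian-style /etc/network/interfaces file; the VLAN IDs and provider roles are purely illustrative:

```
# Hypothetical subscriber port on an open-access fibre network.
# One physical fibre drop (eth0); each service provider is delivered
# over its own VLAN tag. IDs and provider names are made up.

auto eth0.101
iface eth0.101 inet dhcp
    # VLAN 101: ISP A, Internet service

auto eth0.102
iface eth0.102 inet manual
    # VLAN 102: ISP B, IPTV service (handed off to the set-top box)
```

At the central-office end, the operator just maps each subscriber VLAN to the right provider's equipment; switching ISPs becomes a VLAN re-patch instead of a truck roll.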
I'm leery of systems that automatically restart services when they crash, especially if the service just crashes again at startup, and you get into an infinite loop that eventually runs you out of disk space with *.core files.
If you need a system to be up that often, it's much nicer to set up a fail-over system or a cluster, where it doesn't matter if an individual daemon or system goes down, so long as there's another to take its place. Then you have time to investigate why things are failing on one node, and can implement a proper fix.
Auto-restarting crashing daemons is not a feature. It's a band-aid over top of poor system administration.
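To be fair, restart policies can at least be rate-limited so a crash-looping daemon gives up instead of filling the disk with core files. A minimal sketch using newer systemd's unit options (the directives are real systemd options; the service name and path are illustrative):

```ini
[Unit]
Description=Example daemon with bounded restarts
# Give up entirely if the unit is started more than 5 times in 2 minutes
StartLimitIntervalSec=120
StartLimitBurst=5

[Service]
ExecStart=/usr/local/bin/exampled
# Restart on crashes, but wait 5s between attempts
Restart=on-failure
RestartSec=5s
```

That caps the loop, but it doesn't change the underlying point: a unit that hits its start limit is still broken, and somebody still has to diagnose why.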
Is it a hard dependency on logind the daemon? Or a dependency on the logind d-bus interface?
Kwin_wayland has a dependency on the logind d-bus interface, for example. And there's at least one project that implements the logind d-bus interface (don't remember the name of it off-hand). Thus, it's possible to run Kwin_wayland on a Linux distro without systemd installed.
Boot speed-up is a decent goal, but it should be the last goal, not the first.
My biggest issue with all these parallel boot setups is diagnosing issues at boot time. There's no way to guarantee that two boots will be identical. Boot up and daemon startup are no longer deterministic, and it takes a lot of voodoo hand-waving to diagnose issues with systemd, upstart, and other parallel boot managers.
At least with upstart and Debian's parallel boot setup you could flip a switch to serialise the boot, thus making it deterministic. However, by serialising things, you tend to avoid issues, not solve them.
Not to mention, on most servers and a lot of desktops, the longest part of the boot process is the POST, BIOS, device detection, option ROM loading, and other init stuff that happens *before* the boot loader is run, and long before the init process takes over. Whoop-de-doo, I shaved 10 seconds off the "boot loader, kernel, daemons" process! Never mind that it takes 2 minutes to get to that point.
The K1 in the SHIELD Tablet uses standard ARM Cortex-A15 cores, not the Denver CPU cores detailed in this story. Very different beasts.
Happiness is twin floppies.