
Comment: Re:Someone has to be in charge (Score 5, Informative) 641

by phoenix_rizzen (#46662057) Attached to: Linus Torvalds Suspends Key Linux Developer

Kay's been a kernel developer for years, and has clashed with Linus many times in the past, all for the same reason: Kay patches something, breaks a lot of things, and says everyone else has to fix their code to work around the things he broke, as it's "not his problem". Linus has finally had enough of that attitude.

Comment: Re:Lighting is decent but not perfect (Score 1) 208

by phoenix_rizzen (#46651687) Attached to: USB Reversable Cable Images Emerge

Those are all issues with the cable and Lightning protocols, not the actual, physical connector.

The problem with MicroUSB and even full-sized USB is that stupid tongue in the middle of the socket that goes inside the plug on the cable. That tongue can be easily broken by moving the device with the cable plugged in. I've snapped that off a phone and a desktop now. And there's no way to fix it without replacing the entire socket ... not easy when it's soldered to the motherboard.

The Lightning socket/plug is going in the right direction. The contacts should be on the outside of the plug on the cable, and along the inside walls of the socket. The plug should be solid (no holes), and the socket should be just a hole (no pins, tongues, or whatnot).

Compare the headphone jack/socket. Connectors around the inner wall of the socket and on the outside of the plug. Plug is solid. Socket is a hole. Impossible to plug it in wrong. Take design cues from that, from the Lightning connector, hell, even the old mini Christmas lights got it right.

Pins and tongues inside of sockets that a plug has to fit around are just dumb. Didn't we have enough issues with bent pins on VGA/Serial/Parallel/PS2 ports to realise that was the wrong way to do things?

Comment: Re: How are these things related? (Score 1) 202

by phoenix_rizzen (#46605705) Attached to: KDE and Canonical Developers Disagree Over Display Server

You were going on about how X11-over-SSH is so slow, and using VNC is so much better/faster as there's no SSH in the way.

I was saying that SSH is not slow, as we use VNC-over-SSH all the time.

And that X11-over-SSH is not slow, as we use the NX Client all the time (which is X11-over-SSH).

X11 by itself can be very slow over the network. But it doesn't have to be (just look at NX as an example).

Thus, doing things in a similar way to X11 doesn't mean it will be inherently slow.
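As a hedged sketch of what plain X11-over-SSH looks like (the hostname and the application are placeholders, and NX itself needs the separate nxserver/nxclient software on top of this):

```shell
# Forward X11 over SSH: windows from the remote app render on the local display.
# "user@linuxhost" and "xterm" are placeholders for your own host and program.
ssh -X user@linuxhost xterm

# -C adds compression, which can help on slow links (ADSL and the like):
ssh -XC user@linuxhost xterm
```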

Comment: Re: How are these things related? (Score 1) 202

by phoenix_rizzen (#46605557) Attached to: KDE and Canonical Developers Disagree Over Display Server

What's funny about that is that we use VNC-tunnelled-over-SSH everyday (remote helpdesk connections to Windows and Linux stations) without issues. Even when the remote desktop has dual-monitors configured, although that can take a bit of horizontal and/or vertical scrolling of the local VNC client window.

Tunnelling over SSH is not slow, especially if you enable the NONE cipher on both ends. :) Not recommended if you are going over public Internet links, but it can work wonders within an organisation.

Tunnelling X11 over SSH can be slow, but can also be made fast using NX. Downside to that is that it's full-desktop remoting, not per-application remoting. But it works wonders for staff and students to access their Linux accounts at the schools from home (even across ADSL links).

We can even watch 480p YouTube videos in Firefox via VNC-over-SSH across ADSL links (choppy, but watchable). E10 (10 Mbps) sites make it no different from watching locally (with the exception of the complete lack of sound). Using NX makes even the ADSL link enjoyable. And that's all done over SSH (without compression enabled in the SSH client/server).

IOW, tunnelling over SSH is not slow. Whatever app/protocol you are tunnelling will determine the "speed" of the remote app.
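A minimal sketch of the VNC-over-SSH setup described above (hostname, user, and display number are assumptions; note the NONE cipher is only available in patched builds such as HPN-SSH, not stock OpenSSH):

```shell
# Tunnel VNC display :1 (TCP port 5901) through SSH, keeping the VNC
# traffic off the wire in the clear. Host and user are placeholders.
ssh -N -L 5901:localhost:5901 user@remotehost &

# Point the local VNC client at the local end of the tunnel:
vncviewer localhost:5901
```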

Comment: Re:logic (Score 1) 202

by phoenix_rizzen (#46569251) Attached to: KDE and Canonical Developers Disagree Over Display Server

The reasons for introducing Mir are performance, the ability to run on low-footprint devices, and cross-device compatibility.

Jolla would like to know why the need for Mir when they have a Wayland compositor and window manager running on low-end/mid-range mobile devices with excellent (compared to other similar-spec devices) performance.

Comment: Re:So it seemed simple at first... (Score 1) 358

by phoenix_rizzen (#46486891) Attached to: EU Votes For Universal Phone Charger

I'm hoping they move away from the "tiny post with pins sticking up inside the slot" setup that USB of all stripes uses, and toward a "pins on the outside of the slot" design.

There's nothing worse than having that tiny post inside the micro-USB slot break off.

Look at the 3.5 mm headphone jack for inspiration. Look at the Lightning connector for inspiration. Hell, look at the old mini Christmas light bulbs for inspiration. Make the end plug solid, and connect to pins/connectors around the slot that it plugs into. Nothing to break off inside. Nothing to bend.

Comment: Re:So it seemed simple at first... (Score 1) 358

by phoenix_rizzen (#46486847) Attached to: EU Votes For Universal Phone Charger

The USB3 micro plug, as seen on some Samsung phablets, is a micro-USB2 plug + an extra plug. So, you can either connect a micro-USB2 cable and get USB2 speeds, or you can connect a micro-USB3 cable and get USB3 speeds.

However, it's a HUGE connector, almost twice as wide as a micro-USB2 connector.

I believe the Note 3 uses it.

Comment: Re:Dumb (Score 4, Informative) 358

by phoenix_rizzen (#46486785) Attached to: EU Votes For Universal Phone Charger

The EU mandated microUSB charging ports on phones, thus reducing the "cable clutter" that existed 5-odd years ago.

Now, the EU is mandating the other end of the charging cable: the actual, physical charger it plugs into. Meaning, you'll only need a single charger, with a USB port in it, to charge your flip phone, your 4" mini-smartphone, your 6" phablet, and your 10" tablet.

Right now, each device has its own charger, with its own specs (how many volts at how many amps). And you generally can't charge a tablet using an older phone charger.

So you end up with a handful of different chargers in your drawer that you have to pick through to charge each device, or you end up with a drawer full of chargers you never use as you just plug everything into the most powerful charger you have (generally the one for the tablet).

Standardising on a single charger would eliminate all the extra chargers gathering dust in people's junk drawers.

Comment: Re:Tried playing this game (Score 1) 218

by phoenix_rizzen (#46144715) Attached to: Celebrating Dungeons & Dragons' 40th Anniversary

I always preferred Rolemaster for this reason. Everything was based on percentages and tables. You only needed two dice (d10). And everything else was left up to the imagination. There were enough rules to keep everyone in line without getting bogged down in minutiae.

Of course, the best Game Masters didn't bother with 90% of the "rules" and looked at the books more as "guidelines" to keep the action going. The more talking, role-playing, and action, the better the session. If you spent most of your time trying to figure out "how do I ..." in a stack of books, you were missing the point.

Comment: Re:Nvidia has NOTHING to lose at this stage (Score 1) 66

by phoenix_rizzen (#46141465) Attached to: NVIDIA Open-Sources Tegra K1 Graphics Support

ARMv8 supports both AArch32 (32-bit ISA) and AArch64 (64-bit ISA), similar to how AMD (and now Intel) CPUs support both x86 and amd64 ISAs.

Meaning, you can run a 32-bit OS on a 64-bit chip, and get access to all the improvements to the architecture, and it will run like a faster 32-bit chip.

Or, you can run a 64-bit OS on the 64-bit chip, and still run 32-bit apps, and get access to all the improvements to the architecture, and it will run like a 32-bit chip with access to a full 64-bit address space (for the OS, the apps are still limited to 4 GB each).

Or, you can run a 64-bit OS on the 64-bit chip and run 64-bit apps and get access to all the improvements to the architecture, including access to the full 64-bit address space within each app.

Or, you can mix and match the last two as needed. Which is what Apple is doing with their A7 SoC (64-bit CPU, 64-bit OS, mix of 32-bit and 64-bit apps).

There's a lot more to the ARMv8 architecture than just 64-bit-ness. There's a lot more memory bandwidth, there are a lot more registers, there's a lot of clean-up in the ISA, etc., etc., etc.

You don't need more than 4 GB of RAM to get improvements from running a 64-bit SoC. Just like you don't need 4 GB of RAM on the desktop to get improvements from running an AMD CPU in 64-bit mode with a 64-bit OS.
