
Comment: "4:3" vs "4x3" (Score 1) 315

by DrYak (#48441359) Attached to: Eizo Debuts Monitor With 1:1 Aspect Ratio

It's not 4:3, it's 1:1

Yes. And he was saying "4x3". As in "put 12 displays in an array: 3 rows of 4 screens each."
You end up with a giant wall with a 4:3 aspect ratio (as each tile is square).

Then you buy 132 more displays, arrange all 144 in 16 columns of 9 (16x9), and you can cover a building's facade with your very own 16:9 tiled jumbo display in LD ("ludicrous definition") and create an open-air cinema with your neighbours.

But, as he mentioned, driving 144 display tiles in total is going to be a little bit complicated.
(5 displays max per Radeon card, 4 Radeon cards per motherboard, so 20 displays per PC tower. You could probably drive 2 tiles per DisplayPort using splitters like Matrox does, so you need 1 PC tower per 40 tiles. That's at least 4 big PC towers to drive all this.)

But totally worth it, so you and your neighbours can together brag about running the first "Ludicrous Definition" cinema in the city (64x the resolution of Ultra HD).
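The back-of-the-envelope maths above can be sketched in a few lines (the tile counts and per-tower limits are just the assumptions from this comment, not vendor specs, and the 1920x1920 square tile resolution is an assumption too):

```python
import math

# Assumptions from the comment: 5 displays per Radeon card,
# 4 cards per motherboard, 2:1 DisplayPort splitters.
tiles = 16 * 9                      # 144 tiles in the 16x9 wall
displays_per_card = 5
cards_per_tower = 4
splitter_factor = 2

tiles_per_tower = displays_per_card * cards_per_tower * splitter_factor  # 40
towers = math.ceil(tiles / tiles_per_tower)                              # 4

# Resolution vs. Ultra HD, assuming each square tile is 1920x1920
wall_pixels = (16 * 1920) * (9 * 1920)
uhd_pixels = 3840 * 2160
print(towers, wall_pixels // uhd_pixels)   # 4 towers, 64x Ultra HD
```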

Comment: Theory (Score 2) 140

You would effectively starve to death within a year of symptoms showing up, regardless of how much you ate. (IIRC, actual starvation could prevent/slow the progress in some way)

Well from a purely theoretical point of view:
it could be possible to survive on a low-carb diet, eating only proteins and fats and avoiding sugar completely.
Basically, eating only steak and salad, never bread.
(The kind of diet that bodybuilders use).

In that situation the body obtains most of its energy by burning fat and maintains blood sugar levels by gluconeogenesis.
(This metabolic regime consumes some protein, hence the increased need for meat to avoid starvation.)

But it's complicated to get right.
Compensating for a Type 1's lack of insulin is much simpler.

That's what some think the early human diet looked like before agriculture (the theoretical basis behind the paleo diet).
That's also what bodybuilders use to burn fat (as mentioned above).
Before insulin, it was the only way to keep Type 1 diabetics alive.
It was also recently mentioned on /. as an insulin-free alternative treatment.

Comment: Instagram (Score 1) 206

by DrYak (#48338083) Attached to: Zuckerberg: Most of Facebook Will Be Video Within Five Years

I think it's not so much that no one cares as that decent video calls require more infrastructure than a phone. The camera needs to be steady, lighting needs to be good, sound isolation needs to be good... all in all, video calls work much better from a laptop sitting on a desk in an office, or better yet in a conference room with dedicated video-conferencing equipment.

And the same goes for most other forms of video.
Making a decent video clip, instead of just quickly recording something with a camera phone, is difficult.
Much more so than putting some effort into a photo.

Until some startup finds a way to do the video equivalent of Instagram (i.e., making it easy to create nice video clips), video won't be a major communication medium.

Comment: Welcome to SIGINT (Score 2) 122

by DrYak (#48337655) Attached to: New NXP SoC Gives Android Its Apple Pay

If you think that some software sandboxing is the equivalent of a "secure enclave" chip in terms of secure-ness, you're sadly mistaken.

If you think that a "secure enclave" is really secure when it's implemented as a SEPARATE CORE ON THE SAME FUCKING SILICON, you really don't believe in SIGINT.
In a world where scientists have been able to recover GPG private keys just by analysing a signal.
An acoustic signal: noise.
Over a smartphone's crappy mic.
Do you really think a "secure" core on the same piece of silicon stands any chance?

Comment: Equivalence (Score 4, Informative) 122

by DrYak (#48337615) Attached to: New NXP SoC Gives Android Its Apple Pay

Functionally: They are equivalent.
- In both cases, it's a payment system that supports the NFC protocol, so you can pay wirelessly just by putting the phone next to the payment machine.

Hardware-wise: They are not exactly the same.
- Google Wallet is just a generic payment system (like PayPal, etc.). In most phones, it's simply the OS (Android) talking over NFC to the payment machine. It's up to the OS and application to handle security any way they choose (which might or might not involve hardware - most implementations do not, though some smartphones did have some form of it).
- Apple's system specifically uses a separate piece of hardware: a TPM-like chip that is secured and hardened and holds the actual banking information (which never leaves the chip). Security is by definition handled by that specific chip. The whole system works like a wireless credit card with a smartphone bolted next to it: the smartphone acts as a GUI to the credit card, but the card handles the transactions itself.
Some Android smartphones did in fact work exactly like that (they had a dedicated chip that was more or less a micro credit card, handling the NFC talk itself, with the smartphone merely interfacing with the card).
- NXP is a chip vendor that makes hardware components for payment. They worked on Apple's chip. They are now selling this chip to Android smartphone manufacturers too.

Apple's emphasis is on security: they want their "dedicated non-hackable credit-card-on-a-chip" approach.
Google's emphasis is on making the technology available everywhere. High-end phones will have a chip; low-end phones will simply emulate a virtual credit card with a piece of software talking over NFC. But it's going to be available as widely as possible.

From a security point of view:
Google's idea isn't the most secure ever: it relies on the OS being good at correctly isolating and sandboxing apps. But bugs happen.
Apple's idea isn't perfect either. In theory, a separate piece of hardware is easier to make tamper-proof. In practice, it's just a subpart of the same piece of silicon as the rest of the system (they are SoCs, systems-on-chip: nearly the whole modern smartphone is a single chip), and hackers are bound to find a way to leak sensitive data. (I mean, for fuck's sake: hackers have been able to deduce GPG private keys by reading signals leaking out of a computer. Noise. Captured by a smartphone's mic. If they can steal your crypto just by listening to caps singing over a crappy mic, do you really think that a core on the same piece of silicon is isolated enough?!)

Comment: Will they ? (Score 1) 64

by DrYak (#48221193) Attached to: Tracking a Bitcoin Thief

So what? Since there's no central authority to block transactions or seize funds, they'll simply be passed around until any relation with the crime is meaningless, while almost everybody in the transaction chain is blissfully unaware that somewhere upstream they were stolen.

Will they pass them around? Enough to blur any relationship? In a secure way that never leaks any identity?
(Oops, one of the exchanges I sent money to managed to record my IP address. No matter how much I keep mixing downstream, part of my identity has leaked there.)

Remember that they have adversaries like governments, who (as recently proven with the NSA, for example) have quite a few resources.
A single policeman might not be able to pull enough data and analysis.
But if a government suspects some big danger is possible (the "pedo-terrorist pirates!" threat, or more realistically: juicy corporate spying opportunities :-P ) and decides to throw resources at it, tracking might be achievable.

It's not impossible for the thief to get out unidentified. But it requires being particularly smart.

Imagine if cash was that way: every time the grocery store tried to deposit money at the bank, the bank would say "oh no, this and that bill came from a gas station robbery two years ago, so we'll return it to the gas station and deduct it from your deposit."

Cash *does* function this way (a bit): bills have serial numbers. If the grocery store deposits a bill with a known serial number on it, police might show up the next day asking for the CCTV surveillance tapes, because that serial number happens to belong to a bill that passed through the hands of a known drug kingpin/terrorist/pedophile ring leader/etc. With enough such incidents, you might get a vague idea of the identity of the people you're looking for.
Unless the criminals have been absolutely perfect in their laundering and have managed never to leak any info (i.e., by the time the known bills are flagged, they're in the hands of complete random strangers).

Google for "ransom bills reappear"-type news reports.

Comment: Mass analysis (Score 1) 64

by DrYak (#48220901) Attached to: Tracking a Bitcoin Thief

A single transaction tracked? Yes, you mostly get just one other bitcoin wallet.

Massively track thousands of such transactions? (That's beyond the capabilities of a small-budget research team, but well within the capabilities of any decent government.) And correlate them with "end-point transactions" (transactions that can be traced to a real-world identity: buying something from an e-shop using bitcoins and having it delivered to an address)?
Then, if the tracked person isn't using an insanely high number of "tumblers/mixers" (i.e., laundering) or moving coins in and out of tons of exchanges (basically also a form of mixing), you might find some correlation:
aka "a significant number of these BTC have transited through these wallets, all mapped to the same real-world address/person".
That is not enough to warrant an arrest, but it is enough to put the real-world persons with the shortest "path" to the tracked transaction on a suspects list for further investigation by classical police work.
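A toy sketch of that correlation step (completely hypothetical wallets and a deliberately naive path-counting score, just to illustrate the idea):

```python
from collections import Counter, deque

# Hypothetical transaction graph: wallet -> wallets it sent coins to.
graph = {
    "stolen": ["mix1", "mix2"],
    "mix1": ["w1", "w2"],
    "mix2": ["w2", "w3"],
    "w1": ["shop_a"], "w2": ["shop_a"], "w3": ["shop_b"],
}
# End-point wallets that leaked a real-world identity
# (e.g. a delivery address at an e-shop).
identities = {"shop_a": "person A", "shop_b": "person B"}

# Count how many distinct paths lead from the stolen coins to each identity:
# more (and shorter) paths -> higher on the suspects list.
score = Counter()
queue = deque([["stolen"]])
while queue:
    path = queue.popleft()
    wallet = path[-1]
    if wallet in identities:
        score[identities[wallet]] += 1
    for nxt in graph.get(wallet, []):
        queue.append(path + [nxt])

print(score.most_common())  # person A is reached by more paths than person B
```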

(Sadly, governments often don't have such a concept of a "suspects list". Very often such uncertain statistical results won't be used as a "hunch" but will get you put on the "no fly list" and such.)

That's why the bitcoin protocol is considered "pseudonymous" and not "anonymous".
That's also why we need to have:
- laws against data-collection abuses (because someone brilliant at the NSA/CIA/etc. will definitely try to jail people on this basis, or at least put them on a "pedo watch list", without much thinking)
- better ways to do anonymous transactions (optional tumblers/mixers for BTC, or alternate protocols that include provisions for anonymity)

Comment: Workforce vs. number served (Score 5, Insightful) 720

by DrYak (#48220641) Attached to: Automation Coming To Restaurants, But Not Because of Minimum Wage Hikes

Currently, the way it's implemented in European countries, McD doesn't use it to reduce the workforce (you're still required to walk up to a clerk to retrieve your order).
McD uses it to accelerate its service and increase the "number served": by the time you finish typing your order and have confirmed it, the order has already been broadcast to the employees' screens. By the time you finish paying and walk to the queue, your order is already ready.
This drastically cuts the waiting time, which European McD's use to cram in more customers served per minute.

In the long run, such strategies won't necessarily reduce the workforce that much, but they will be used to propel "fast food" to a whole new definition of "fast".
On the other hand, that will probably be quite alienating for the workforce: no more breaks between customers, no more small talk while ordering. The work experience is going to be Charlie Chaplin's "Modern Times"-style: read the screen, pack the bag, hand over the bag, as fast as possible, and repeat so the next customer doesn't need to wait.

Comment: Good / Bad Idea (Score 1) 287

by DrYak (#48211237) Attached to: Will the Google Car Turn Out To Be the Apple Newton of Automobiles?

That's an idea which could be useful in theory.
(e.g., cars with drivers will still be able to display warnings about red lights, speed limits, etc. based on the info broadcast by traffic signs)

But it has a few problems:

- The implementation will probably be botched. Expect the thing not to be properly signed/authenticated, thus enabling malicious hackers to spoof information. (Similar to how hackers hijacked RDS-TMC and broadcast "bison crossing" in Germany a few years back, as covered on /. )

- Such a system lacks a fail-safe option. A human might notice that a traffic light is off and fall back to other driving behaviours. A robot might not realise that there is no emitting signal (the robot can't see a missing emitter, unlike a human who can notice a broken traffic light even without any light colour coming off it). In some cases it might be okay (missing traffic light: drivers are supposed to fall back to priority-yield rules, which is probably the default behaviour of a robot arriving at a crossing without signs), but it might be problematic in other cases (a "danger ahead" sign with a broken emitter).
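A toy sketch of that fail-safe argument (hypothetical messages and behaviours): the point is that "no broadcast received" has to be an explicit branch mapping to a safe default, not an unhandled case:

```python
# Hypothetical sketch: the robot cannot distinguish "no sign here"
# from "broken emitter", so absence of a signal must have a safe default.
def crossing_behaviour(broadcast):
    if broadcast is None:
        # Fail-safe: no signal received -> assume an unsignalled crossing
        # and fall back to priority-yield rules.
        return "priority-yield"
    if broadcast == "red":
        return "stop"
    if broadcast == "green":
        return "go"
    return "priority-yield"  # unknown message: treat like no signal

print(crossing_behaviour(None))   # a dead emitter degrades to yielding
```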

- Car insurance companies are going to abuse the shit out of this (cue mandatory dongles that spy on whether you obey traffic signs; of course driving dangerously and ignoring signs is bad, but violating privacy is bad too). At least European countries are a bit stricter regarding privacy.

Comment: The way bank do it (Score 3, Informative) 121

by DrYak (#48197153) Attached to: Google Adds USB Security Keys To 2-Factor Authentication Options

The way some banks do it is that the authentication requester (a 2F-protected service provider) sends a signed/encrypted message that the security token decodes/verifies/displays. That message can't be tampered with (cryptography).

So the token will display the message (something like "Authentication required to access").
So if an attacker tries to intercept your credentials by opening an actual Google page in the background, you'll notice that what the thing pretends to be on screen and what the dongle registers as the requester aren't the same.

The way to fool the user would be to actually look like the page you're trying to spoof. So an attacker needs to look like Gmail, so the user thinks he's on Gmail, whereas actually it's a malware page masquerading as it and relaying security-token requests from the real Gmail.

Now the way banks counteract that is that any critical action (payment, etc.) needs to be confirmed again through the security token system. So the theoretical man-in-the-middle can't inject a 10'000$ payment to his Cayman Islands account: every payment needs to be confirmed again, and the bank will issue a confirmation message describing the transaction.
You'll notice if, when paying a phone bill, the confirmation message instead says 10'000$ to the Cayman Islands.

Overall, it works as if the security token were its very own separate device, designed to work over an unreliable, untrusted channel.

(The device doesn't implement a full TCP/IP stack. Most example devices accept only:
- a string of characters as input (i.e., you need to type the last five digits of the account you want to send funds to; the bank will notice when you type the digits of your utility company while the man-in-the-middle has tried to inject a Cayman Islands account from your browser).
- a 2D flashing barcode to automate string input.
- for the craziest solution: writing a string to a file on a flash disk, this flash disk being shared with the security token's microcontroller.
Each time, the attack surface is very small. Only a short string of data is passed. You can't get many exploitable bugs.

For the output, only a string again:
- that you read from the token's screen and type in.
- that the token can type on your behalf, communicating with an HID chip on the same device.
- that the token can send to a flash device, which makes it visible inside a file.
Again, the security token itself is limited to sending just a string. Very small attack surface. All the funny "stuff" is implemented outside, and thus there's very low risk of remote exploitability.)
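A minimal sketch of the verify-then-display idea, assuming a shared secret and an HMAC (real tokens may well use asymmetric signatures instead; all names here are made up):

```python
import hmac, hashlib

SECRET = b"shared-between-bank-and-token"   # provisioned when the token is issued

def bank_sign(message: str) -> bytes:
    """The bank attaches a MAC to every message it sends to the token."""
    return hmac.new(SECRET, message.encode(), hashlib.sha256).digest()

def token_display(message: str, tag: bytes) -> str:
    # The token only ever accepts a short string plus its MAC; anything
    # tampered with in transit fails verification and is never shown.
    if not hmac.compare_digest(tag, bank_sign(message)):
        return "TAMPERED MESSAGE - ABORT"
    return message  # shown on the token's own trusted screen

msg = "Confirm payment: last 5 digits 12345"
print(token_display(msg, bank_sign(msg)))                 # genuine: displayed
print(token_display("pay Cayman account", bank_sign(msg)))  # spoofed: rejected
```

A man-in-the-middle in the browser can swap the message, but not forge a matching MAC, so the spoofed text never reaches the token's screen.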

Comment: Again fixed pipe (Score 1) 55

by DrYak (#48196197) Attached to: Direct3D 9.0 Support On Track For Linux's Gallium3D Drivers

Again, there's a reason why Glide wrappers tend to target OpenGL 1/2 instead of 3/4.

Glide is a fixed pipeline.
Glide and the other APIs back then (DirectX 7, OpenGL 1/2, etc.) were just about painting plain triangles: paint a triangle with tips at vertices v1, v2, v3 using texture T1, optionally a second texture T2 as a lightmap (and, for the few architectures that had it, a third texture T3 as a bump map).
That's it.
For any pixel on the screen, the only thing the hardware is capable of is fetching 2 or 3 textures (interpolating and mipmapping them) and combining those textures in a hardware-specific and fixed way.

Modern APIs (OpenGL 3/4, DirectX 9 and 10/11, Mantle) are all about programmable shaders. For any pixel on the screen, you run a small program (a kernel, in the mathematical sense) which can do pretty much anything you want. You can ask the hardware to draw pretty much anything. You could even ask the hardware to draw a Mandelbrot set (I've done that).
A modern API relies on a back-end that exports the functionality of these general-purpose, highly parallel processors that GPUs have become (Gallium3D is exactly such a back-end; DirectX 11, Mantle, and OpenGL Next are APIs that promise to stay as close as possible to this low level, and OpenCL is a way to make it available for other kinds of general-purpose computing). On top of that sits a high-level API that still works in a highly customisable way: you write shaders that combine several textures the way the artist needs (including effects like occlusion mapping, translucency, sub-surface scattering, etc.), and the API converts these mid- to high-level shaders and texture accesses into lower-level kernels and memory accesses to generate whatever is needed on screen, no matter how complex the maths behind it is. (Remember: a Mandelbrot set is perfectly doable, even if completely useless.)

That's also why a DirectX state tracker makes some sense: DirectX is supposed to sit a little lower on the abstraction scale than OpenGL. It's better to do DirectX-to-Gallium3D (like translating C into assembler, as a regular compiler does) rather than DirectX-to-OpenGL (like translating C into Python).

Glide on Gallium3D would mean rewriting a complete fixed pipeline: expressing all the classical "texture and lightmap" combinations which back then were handled by hardware, and writing modern shaders that re-implement them. Well, guess what? Drawing polygons with a fixed pipeline is already what OpenGL 1/2 does inside Mesa on Gallium.
Instead of rewriting the same stuff twice and risking introducing twice as many bugs, simply use a Glide2GL wrapper. Glide and OpenGL are very closely related anyway.
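To illustrate what "fixed pipeline" means, here is a toy per-pixel version of the one combine Glide-era hardware could do (diffuse texture modulated by a lightmap); on a shader-era GPU this hardwired formula is just one of infinitely many possible fragment programs:

```python
# Toy per-pixel sketch: the entire fixed-function combine of Glide-era
# hardware was roughly "diffuse texture modulated by a lightmap".
def fixed_pipe_pixel(diffuse, lightmap):
    # component-wise multiply, the classic "modulate" combine,
    # clamped to the displayable range
    return tuple(min(1.0, d * l) for d, l in zip(diffuse, lightmap))

# A programmable shader could compute anything here (even a Mandelbrot set);
# the fixed pipe can only ever do this one formula.
print(fixed_pipe_pixel((0.8, 0.6, 0.4), (0.5, 0.5, 1.0)))  # -> (0.4, 0.3, 0.4)
```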

Comment: Glide = Fixed pipe (Score 1) 55

by DrYak (#48176051) Attached to: Direct3D 9.0 Support On Track For Linux's Gallium3D Drivers

It would be nice if support for Glide 2.1 and 3.0 were added too; there is a good chunk of oldies that would benefit, and nowadays Wine has DOSBox built in, so even DOS games would be supported.

Very unlikely, in my opinion:
Voodoo cards (and their Glide API) are fixed-pipeline.
Whereas, from the ground up, Gallium3D was organised around the modern features found in programmable-shader cards.
There's a lot of difference between how these work.

On the other hand, Glide was designed with the simplest hardware-implementable subset of OpenGL in mind. That's why it's easy to write miniGL or OpenGL implementations on top of it (and the reverse too: it's not impossible to write Glide-to-OpenGL wrappers).
Meaning that, in theory, it could be possible to build a Glide state tracker out of the building blocks that Gallium3D back-ends expose to the Mesa OpenGL tracker.

Comment: Small percentage (Score 4, Informative) 55

by DrYak (#48176023) Attached to: Direct3D 9.0 Support On Track For Linux's Gallium3D Drivers

This support in Mesa will allow these games to be ported more easily, rather than forcing a rewrite of a major portion of any game engine: the display layer.

This won't help much for porting. It only works for drivers built on Gallium3D. Thus, it only works on Radeon and Nouveau (and the alternative Gallium3D-powered ILO; the official Intel driver runs on classic Mesa).
So only very few end users will be affected. It's not worth counting on Gallium Nine for a port, as you'd be missing the large share of users who instead run the proprietary and/or official drivers (especially since Nvidia's blob has much better hardware support than the reverse-engineered Nouveau, due to lack of documentation).

On the other hand, Gallium3D gives a nice, faster route for Wine, so a few select users can get straight Direct3D support instead of going through a translation layer. So it's a relative benefit for Wine itself.

The developer can even choose to go the Wine route and simply provide a wrapper for their product, such as Star Trek Online uses with their Mac port.

That has technically been possible since before the Gallium Nine driver anyway. The presence or absence of this driver doesn't change the feasibility of such ports; it only makes them faster for a few select users by removing translation layers.

This may be hugely important for the Steam Box initiative.

Well, it depends. I doubt that, when it comes out, it will rely on open-source drivers. At least not for Nvidia hardware: the difference in stability and hardware support isn't worth the effort.

On the other hand, if AMD gets their shit together in time and releases the hybrid closed/open-source driver as promised (i.e., you run the open-source kernel driver "amdgpu"; then, as an OpenGL implementation, you're free to use either the open-source Mesa Gallium3D driver or the Catalyst driver, which will be only a GL+CL library running on top of the exact same open-source base), you might see AMD Steamboxes that let the user switch between the two GL implementations on the fly. That could mean using the open-source GL/CL for the interface and for the few select games that need DirectX, and switching to the Catalyst GL/CL for games that need GL 4.x, with Steam maintaining a database of which version runs better for which game and handling the switching without user intervention.

Over all, Direct3D is a much simpler and lower level API (at some point of time it was considered to be a back-end to be targeted by openGL drivers) so it would be supported faster than openGL and would give definitely a performance boost.

Also, especially if AMD releases Mantle for Linux (or if it becomes "OpenGL Next"), that might attract the interest of some multi-platform developers: such AMD-powered Steamboxes would be closer to the hardware found in the other consoles (an AMD APU or GPU sits in all the other consoles of this generation) and might help PC ports (at least on AMD things might get optimised a bit, thanks to reusing the work done on consoles).

Comment: Systemd uses (Score 3, Insightful) 303

by DrYak (#48103359) Attached to: What's Been the Best Linux Distro of 2014?

A few random examples where systemd helps:

- If you look at it, probably 99% of all services on Linux are just about starting an executable with a few parameters.
-- With systemd, you do exactly that: write a service file that gives the name of the executable to run. And that's it. Done. Much easier to maintain.
-- With sysvinit, each distro has its own local variant of boilerplate code that needs to be copy-pasted around, and each service needs a whole script in /etc/init.d.
A whole script of duplicated lines vs. a simple text file.
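A hypothetical minimal unit file, to show the contrast (the service name and path are made up):

```ini
# e.g. /etc/systemd/system/mydaemon.service -- the whole "init script"
[Unit]
Description=Example daemon (hypothetical)

[Service]
# Run in the foreground; systemd handles daemonisation, logging, cgroups.
ExecStart=/usr/bin/mydaemon --some-flag

[Install]
WantedBy=multi-user.target
```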

- Becoming a daemon requires some work.
-- Either the developer must do a whole dance inside the code (double fork, sanitising the environment by closing descriptors, etc.)
-- Or you need to take care of it from the outside (startproc, etc.)
systemd (like daemontools and several other such "successors of sysvinit") can take care of that automatically: just run the software in immediate mode, and systemd handles the daemonisation/sanitisation. In fact, you can easily run things like scripts as services.
So you want a daemon that is basically just a gawk one-liner? Feel free.

- Automatic handling of modern kernel features: cgroups, brokering capabilities, etc. Classical sysvinit has no concept of these (of course, they didn't exist back then).
-- You would need either more kludges in your init.d scripts,
-- or a modern system that can take care of that. systemd is one of them.

- Very lightweight container creation: other parts of systemd take care of stateless systems (basically, you only need /usr for a system to work; /etc and /var can be automatically rebuilt with default settings from /usr if they are empty), and various daemons under the systemd project can take care of the basic initialisation steps (you don't need a full-fledged DHCP server/client pair compatible with every possible corner case and supporting every option under the sun when all you need is to quickly hand an IP to an LXC container - similarly to how one would use dnsmasq, systemd has its own micro DHCP implementation).
That makes it possible to use LXC-style containers (and thus a much higher level of isolation) for anything you don't trust and would like to run in its own container.
You don't trust Skype, especially since Microsoft took it over? An LXC container combined with SELinux or AppArmor (which LXC supports) would be a way to isolate it. Systemd (not the PID 1 daemon, the whole project) can help generate such containers on the fly, without any administrative intervention or configuration required.

You might not need these. And you're free to stick with old sysvinit if you want. Or at least move to a more modern spiritual successor of it (OpenRC).
(Gentoo gives you the choice of init system. Or you could gather people and start "Rubuntu, an OpenRC spin of Ubuntu".)

Or you might want these features. And systemd is then a nice single stop for all this, plus more. (Though you could find similar daemons giving similar functions spread over 20 different projects.)

It's a bit like the situation with TeX (a nice single stop for a ton of text-processing and typesetting filters), Ghostscript (printing), pnmtools or ImageMagick (a single suite of tightly integrated image filters/processing), etc.
Systemd is a similar suite, containing all the necessary building blocks for system initialisation, process starting, etc.

Systemd has tons of useful functionality, and thus lots of distributions decided to pick it up as a sysvinit successor.
(Including distributions not depending on GNOME.)
