
Comment Re:Remember M$'s role on SCO? (Score 1) 192

They've repeatedly abused their monopoly power, e.g. by bundling IE and giving it away for free with the express purpose of cutting off "Netscape's air supply", so that they could project their preexisting monopoly power in the desktop space into the nascent Internet space. I don't care what you consider evil or not, but I'm glad MS ended up not dominating the Internet, and the successful conviction and the resulting PR fallout may have had an impact on that. If you read up on the case, you may learn that being a monopoly is not in itself illegal, and wanting to grab even more power is not illegal; however, abusing monopoly power in doing so is. But as you say below, you're currently a merry part of the MS ecosystem, so I understand the motive to rationalize and trivialize the fine line between competition and abuse by an 800-pound gorilla.

Comment Re:Remember M$'s role on SCO? (Score 5, Insightful) 192

It's not "meh, both are evil". Microsoft had been evil long before Google even existed. They didn't just 'not let other large competitors index their stuff'. They went after minuscule companies like Netscape, and emailed among each other how they'd cut off Netscape's air supply, which they did, and as a result, had no real competitor in the browser space for about a decade - even though they copied Netscape's product to begin with. They were charged with abusive monopoly behavior and they were convicted.

I'm always appalled when a guy comes along and lazily simplifies corporate wrongdoing by saying one is just as evil as the other. No, there are differences; it almost never happens that two entities are equally this or that. Google would have to rule the Internet for about a hundred years to accumulate as much sin as Microsoft has, and Microsoft would need to exist for a millennium to bring about as much technological progress as Google has already created. Microsoft is the new Computer Associates and the new SCO, all in one.

Comment Two years? (Score 1) 142

> Google has spent two years working with the Real Change organization to develop a barcode-scanning app which lets passers-by purchase a digital edition with their mobile phones

Srsly? Put it in the Guinness Book for the longest tech spike ever. Had Elon Musk adopted this ambitious pace, he'd still be using a slingshot.

Comment Re:You probably could tell looking close up (Score 4, Insightful) 152

I like resolution independence, so the long-term future is bright; however, that long-term future is bound to include screens with print-quality resolution (600-2400 dpi) and beyond. But during transitional times, it's nice for compatibility if applications etc. that were designed for lower-resolution screens still look OK.

For example, there's no way you can upscale a 640x480 image to a 1024x768 screen in such a way that it looks as good as it would on a native 640x480 display, and you don't want to leave black bars either. These are pretty low resolutions, and the artifacts of rescaling are very obvious.
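A quick back-of-the-envelope sketch in Python shows why (the helper name is just mine; the two resolutions are the ones above): the factor works out to 1.6, so every source pixel has to be smeared across a non-integer number of target pixels, and interpolation artifacts are unavoidable.

    # Rough sketch: check whether an upscale is an exact integer multiple.
    # A non-integer factor forces interpolation, which is where the visible
    # artifacts come from at resolutions this low.
    def scale_factors(src, dst):
        (sw, sh), (dw, dh) = src, dst
        return dw / sw, dh / sh

    fx, fy = scale_factors((640, 480), (1024, 768))
    print(fx, fy)                                # 1.6 1.6
    print(fx.is_integer() and fy.is_integer())   # False -> interpolation needed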

However, there's decent content made at things like 720p, FHD and slightly higher resolutions that may suffer from similar rescaling artifacts if shown on a higher-pixel-count screen that is "just retina enough".

This is one reason I support resolutions beyond what's obviously discernible with the naked eye, e.g. into print-press resolution territory.

Another reason is that while visual acuity starts to fail to discern pixels at "retina" resolutions (around 300 dpi), that's more of an entry point. Art, business and scientific visualisations, and electronic press publications do benefit from higher resolutions. Not to mention that a lot of us have better eyesight than 20/20, and not everybody keeps their device at the 12" distance all the time.
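To put rough numbers on "retina", here's a small sketch using the usual rule of thumb that 20/20 vision resolves roughly 60 pixels per degree of visual angle (the function and the sample figures are my own, not from any particular spec):

    import math

    # Sketch: pixels per degree of visual angle for a given pixel density
    # (dpi) and viewing distance (inches).  ~60 px/deg is the figure usually
    # quoted for 20/20 acuity; better eyes or shorter distances need more.
    def pixels_per_degree(dpi, distance_in):
        return dpi * distance_in * math.tan(math.radians(1))

    print(round(pixels_per_degree(300, 12), 1))   # ~62.8 -> "retina" at 12"
    print(round(pixels_per_degree(300, 8), 1))    # ~41.9 -> not enough up close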

Additional reasons:
- mobile devices "prime" the economically viable display technology for VR and AR, where much higher resolution is essential
- larger screens tend to eventually follow the DPI standards set by mobile, e.g. the iMac 5K is approaching iPhone 4 territory in terms of DPI (see the rough calculation after this list)
- very high resolutions force the developer community to finally steer away from obsolete units and concepts like the pixel (useful only at the low, hardware level)
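The DPI comparison above is easy to reproduce; here's a small sketch using the advertised resolutions and the commonly quoted diagonal sizes (27" and 3.5" are my assumed figures):

    import math

    # Sketch: pixels per inch from the panel resolution and diagonal size.
    def ppi(width_px, height_px, diagonal_in):
        return math.hypot(width_px, height_px) / diagonal_in

    print(round(ppi(5120, 2880, 27.0)))   # iMac 5K  -> ~218 ppi
    print(round(ppi(960, 640, 3.5)))      # iPhone 4 -> ~330 ppi (Apple quotes 326)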

The increase in resolution is milking a well-known process in electronics called scaling, or miniaturisation.

Having said all this, the new areas of improvement should be:
- viewing angle and independence of color from angle
- color gamut
- contrast, white level, black level
- calibration
- latency and blur
- integration (scanner, fingerprint reader, camera, touch, physical objects)
- plasticity and cost (eventually replace paper)
- directional projection for individual eyes with no 3D glasses (2 streams, or ideally, shared viewing)

So there's a long way to go before we can render a surface with the radiance of a butterfly's wing...

Comment Re:You probably could tell looking close up (Score 1) 152

What does it even mean that you were one of the biggest advocates? Was there some ranking of the global population and you were in the top 100, or something? Or maybe you have a website where you promoted this idea long before it was widely foreseen, or you wrote regularly to mobile manufacturers, or were involved in R&D? Just curious.

Also because I'd expect a big advocate to know that font size is a different thing from pixel size. Even casual and moderate advocates of hi-res screens knew about it and informed the general population that it's okay to have high resolution: your fonts etc. can be scalable!

One thing a 4K mobile screen is good for: upscaling visual elements from designs intended for lower-resolution screens. If the resolution is "just enough" for your eyes not to see individual pixels in optimised content, then upscaling a somewhat smaller view may produce visible artifacts; that is much less likely if the physical resolution is "ridiculously high".
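Here's a minimal sketch of that point (my own numbers: a "just retina" ~326 dpi panel versus a roughly 800 dpi 4K phone-sized panel, both viewed at 12"). Interpolation fringes are roughly one physical pixel wide, so the question is whether one pixel subtends more or less than the ~1 arcminute that 20/20 vision resolves.

    import math

    # Sketch: angular size of one physical pixel, in arcminutes, for a given
    # density (dpi) and viewing distance (inches).  Upscaling fringes are
    # roughly pixel-sized, so the smaller this number, the less visible they are.
    def pixel_arcmin(dpi, distance_in):
        return math.degrees(math.atan((1.0 / dpi) / distance_in)) * 60

    print(round(pixel_arcmin(326, 12), 2))   # "just retina" phone -> ~0.88'
    print(round(pixel_arcmin(801, 12), 2))   # ~5.5" 4K phone      -> ~0.36'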

Comment Re:masdf (Score 2) 297

Lion tamer? Sounds like a realistic next goal for someone like this ticking-bomb guy. Once someone clearly identifies with the goal of killing hundreds of people, it's more economical to test his intentions (which they did) and carve the rotten flesh out of society's body than to resort to asking him "enjoy the world, how do you feel about your mother, and hey, here are some pills; if all else fails they'll change your mind (because you want this, right?), pretty please never ever skip them", or to following his every move for the rest of his life at huge expense to society, which is never an airtight process anyway.

We can moralise a lot from the vantage point of First World sophistication, but someone infected by medieval concepts will only exploit this as cracks in society's self-defense mechanism. If they go medieval on us, we need to take appropriate action, in pragmatic ways, to defend ourselves, rather than being immersed in navel-gazing, rationalising murderous behavior, etc. If he was ready to set off the bomb, then of course let's remove him from society. I think most vegetarians would resort to eating meat if the alternative were starvation; most animal lovers don't have a problem squishing the odd mosquito or wasp when bitten. A humanistic person like you also needs to deal with forces whose very intention is to end humanistic society. Sure, learn about stuff, analyse, rationalise, give planet-sized benefits of the doubt, and contemplate how we should acquit criminals if it were proven they had a bad hair day or somebody looked at them with contempt. But in the meantime, defend yourself from clearly demonstrated dangers, otherwise you might find yourself under Sharia law and you can forget about open discourse.

The FBI presented the guy with an 'opportunity' the same way some jihadist cell could have, having identified him as a willing contributor. What's the difference? What if he had been approached by a real terrorist cell, had done the same thing, and, e.g. for some technical reason, the bomb hadn't gone off?

I insist that the FBI take such preventive measures, rather than just resorting to mopping up the blood, identifying bodies, and learning what happened and why after the event - which is usually pretty clear to begin with: they were jihadists, or lone-wolf criminals, period.

Comment Re:masdf (Score 2) 297

>If someone openly stated they want to become a martyr and hurt or kill a lot of people, they are mentally ill, whether they intend to carry it out or not. That's not open to debate.

No, you're only proving that you're incapable of reasoning properly about the real world. You hold certain values and convictions, and your spotty thought processes lead you to assume that all other people do too, even those who grew up in radically different environments, maybe in a setting resembling the medieval ages, or who were born to parents who themselves grew up in such an environment, and now they and their children have difficulty adjusting to the First World. You are apparently incapable of understanding that there are lots of religions and lots of calls to arms, especially in the Muslim world. Therefore you don't model a jihadist properly; you model yourself, or those who grew up near you, with the slight superficial change of having darker skin, calling God by a different name and maybe speaking an additional language.

Because naive (I'm being polite here) people like you represent a danger to society by improperly resisting society's defense mechanisms, maybe society would benefit more from your visits to a psychologist than from the visits of a jihadist in the making:

You might learn not to project your current mind into the bodies of other people. You might learn that someone plotting something that is unreasonable _to you_ may not be mentally ill. The jihadist, however, won't say, "Yes, sure, after these 10 therapy sessions I now see the world is a beautiful place and I want to contribute constructively to society."

Comment Re: Simple answer ... (Score 1) 315

What is it about the computational medium that makes people perceive it as antisocial? I think it's the screen size, and the fact that it's a small rectangular window into another world. But nothing in computing is inherently antisocial; even discounting 'social networks' and 'groupware', there is great potential in socially shared computing once the sharing can happen in a physical environment, rather than just through those little rectangular windows.

Imagine, for example, large surfaces, e.g. the interior or exterior walls of a building, or pavement, acting as both displays and sensors (camera, pressure, etc.). There are so many interesting and creative potential applications that are socially engaging and inclusive.

Comment Re:Scratch (Score 1) 315

So what's the next step then? It's like the razor ads, with more and more blades.

So let's introduce the next stepping stone: Suxorz!

Suxorz has an even more incredibly large number of puzzle shapes, and even more colors! Of course they have to be smaller, so the text on them is unreadable, and targeting them with the mouse teaches you a skill already! You get everything you get in Scratch! and Snap!, plus:

- Java-like interfaces!
- XML builder!
- an MVC framework that forces manual event handling with string based fake-namespace notifications for race conditions!
- you can convert the program to COBOL and ABAP/4!
- inheritance! - of course, just single inheritance, to force you to create 'visitor' patterns instead of just using multiple argument dispatch (CLOS) - see the sketch below
- C++ style template programming!

As a side effect, it suppresses all other programming paradigms, so visit your local museum to experience stuffed specimens of Lisp, Logo, Forth, Scheme, Matlab, R, Haskell etc. and anything declarative!

Progress!
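Sarcasm aside, the single-dispatch jab is a real pain point. Here's a minimal Python sketch (all names invented for illustration) of the double-dispatch dance a visitor forces on you, next to a crude type-pair table standing in for the multiple dispatch that CLOS-style generic functions give you for free:

    # Visitor: dispatch on the shape via accept(), then on the operation via
    # the visit_* method name - two hops for what is conceptually one call.
    class Circle:
        def accept(self, visitor):
            return visitor.visit_circle(self)

    class Square:
        def accept(self, visitor):
            return visitor.visit_square(self)

    class AreaVisitor:
        def visit_circle(self, c):
            return "circle area"
        def visit_square(self, s):
            return "square area"

    # Crude stand-in for multimethods: pick the handler on the types of
    # *both* arguments at once, no accept/visit ceremony needed.
    COLLIDE = {
        (Circle, Circle): lambda a, b: "circle/circle",
        (Circle, Square): lambda a, b: "circle/square",
        (Square, Square): lambda a, b: "square/square",
    }

    def collide(a, b):
        return COLLIDE[(type(a), type(b))](a, b)

    print(Circle().accept(AreaVisitor()))   # circle area
    print(collide(Circle(), Square()))      # circle/square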

Comment Re:Don't (Score 1) 315

So many unwarranted recommendations for Scratch. It has a pretty crude UI, and it looks unappealing; it feels like adults' conception of what kid programming might be like. Kids are not stupid, they're kids: of course we should simplify and make things approachable, but 'for kids' doesn't mean it should be shit. There's all the deserved dismissal of GOTO, yet Scratch and literally EVERYTHING except Logo teaches imperative programming. When people coded in BASIC, they didn't think about the problems of a language that didn't encourage structured programming. Kids will look back on this period and say, apologetically, "imperative was the only thing available, the default choice, of course I got stuck in this incredibly mechanistic, low-level mindset".

"It is practically impossible to teach good programming to students that have had a prior exposure to IMPERATIVE PROGRAMMING: as potential programmers they are mentally mutilated beyond hope of regeneration."

not Edsger Dijkstra

With interesting data-flow tooling for e.g. data analysis (RapidMiner, Knime) and even shader programming (http://bit.ly/1JwMJED), and with Bret Victor's vision, not to mention his kid-friendly approach to functional programming (http://worrydream.com/AlligatorEggs/) and his visual editing prototypes, we should certainly do better than the n+1th rehash of some frigging 'while' loop into which other imperative instructions can be snapped like puzzle pieces. Everyone and their dog just rehashes this pattern over and over, mindlessly.
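To be concrete, here's the kind of contrast I mean (a toy of my own, not taken from any of the tools linked above): summing the squares of the even numbers, once in the counter-and-mutation style every block editor rehashes, and once as a declarative expression a kid could read out loud as a sentence.

    # The pattern every block-based tool teaches: a counter, a mutable
    # accumulator, and an explicit loop.
    total = 0
    i = 0
    while i < 10:
        if i % 2 == 0:
            total += i * i
        i += 1
    print(total)                                        # 120

    # The same thing said declaratively: no counters, no mutation.
    print(sum(n * n for n in range(10) if n % 2 == 0))  # 120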

Not to mention that Scratch requires literacy, and literacy in English, for no good reason. Oh, and its ergonomics are pretty bad: small fonts and incredibly small interaction targets that require a lot of accuracy. We apparently have just no idea how to use this novel interactive medium called 'computing' - the best we can suggest for kids is '60s-vintage BASIC without GOTO, wrapped in colored puzzle shapes to make it seem 'intuitive'.

The first person to invent a proper content-creation tool for kids will probably have revolutionised computing. There are so many people with artistic talent or creative inspiration who just never get near the computing medium because of its incredibly arcane, fragmented and brittle hodge-podge of a mess.
