Security

Employees In Swedish Office Complex Volunteer For RFID Implants For Access 168

Lucas123 writes A Swedish office building is enabling corporate tenants to implant RFID chips into employees' hands in order to gain access through security doors and use services such as photocopiers. The employees working at Epicenter, a 15,000-square-foot building in Stockholm, can even pay for lunch with a swipe of their hand. Hannes Sjöblad, founder of Bionyfiken, a Swedish association of biohackers, said Epicenter is not alone in a movement to experiment with uses for implanted chips that use RFID/NFC technology. There are also several other offices, companies, gyms and educational institutions in Stockholm where people access the facilities with implanted chips. Bionyfiken has just begun a nationwide study using volunteers implanted with RFID/NFC chips. "It's a small, but indeed fast-growing, fraction which has chosen to try it out." The goal of the Bionyfiken project is to create a user community of at least 100 people with RFID implants who experiment with and help develop possible uses. But not everyone is convinced it's a good idea.

John Kindervag, a principal security and privacy analyst at Forrester Research, said RFID/NFC chip implants are simply "scary" and pose a major threat to privacy and security. Because an implanted NFC chip can't be slipped into a shielding sleeve the way a fob or a credit card's chip can, it can be activated, and its information read, without the user's knowledge. "I think it's pretty scary that people would want to do that [implant chips]," Kindervag said.
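
To get a sense of how little equipment is needed to interrogate an unshielded tag, here is a minimal sketch using the open-source nfcpy library and a commodity USB reader; both the library and the hardware are assumptions for illustration, not details from the story.

    # minimal sketch: print the UID of any RFID/NFC tag that comes within range.
    # assumes the third-party nfcpy library and a supported USB contactless
    # reader -- neither is mentioned in the story; purely illustrative.
    import nfc

    def on_connect(tag):
        # tag.identifier holds the tag's unique ID as raw bytes
        print("tag in range, UID:", tag.identifier.hex())
        return False  # returning False releases the tag straight away

    clf = nfc.ContactlessFrontend('usb')  # open the first USB reader found
    try:
        clf.connect(rdwr={'on-connect': on_connect})  # blocks until a tag appears
    finally:
        clf.close()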

Comment Re:open to whom? (Score 1) 296

note: the use of less-than and greater-than within what i have written above was mangled by slashdot, resulting in it being unintelligible at a key strategic point: the point where script language is mentioned. it's supposed to read <script language=python> and <script language=javascript>.

Comment open to whom? (Score 1) 296

when i started the pyjamas-desktop project i assumed that the "open-ness" written into the mozilla foundation charter would be an inviolable principle that they would adhere to. taking that on faith, i found the python-hulahop bindings from the OLPC project to be perfect for allowing the HTML5 DOM to be entirely (even exclusively) manipulated *python-side* instead of via javascript.

for anyone not familiar with the difference between pyxpcomext and python-hulahop: pyxpcomext was a project funded in 2000 by the mozilla foundation to *literally* embed python - making it a peer language of javascript - *within* a firefox browser. you downloaded a whopping 10mbyte extension for either linux or windows and you could then write not just <script language=javascript> but <script language=python> and it would work, *including* access to the *FULL* and complete set of DOM manipulation functions that we normally expect to belong to javascript (exclusively, as it turns out, in most people's mindsets).

python-hulahop, on the other hand, is (or was) a pygtk widget which allowed one to create a GTK window that happened to have a Gecko (HTML5/DOM) engine running in it, and which *happened*, amazingly, also to provide the full set of DOM manipulation functions: starting from a python function GetDOMDocument() and going from there to the thousands of functions one normally expects to be the exclusive, monopolistic domain of javascript.
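
to give a flavour of it, the shape of the code was roughly as follows. this is a sketch from memory: the module layout and method names are approximate, so treat them as illustrative rather than as the exact hulahop API.

    # rough sketch, from memory, of the kind of thing python-hulahop made
    # possible: a GTK window with a Gecko engine inside it, and the DOM
    # manipulated entirely python-side.  names are approximate / illustrative.
    import gtk
    import hulahop
    hulahop.startup('/tmp/hulahop-profile')   # initialise xulrunner with a profile dir
    from hulahop.webview import WebView

    win = gtk.Window()
    win.set_default_size(800, 600)
    wv = WebView()                            # the pygtk widget wrapping Gecko
    win.add(wv)
    win.show_all()
    wv.load_uri('about:blank')

    def populate(*args):
        doc = wv.get_dom_document()           # entry point into the full DOM API
        div = doc.createElement('div')        # the same DOM calls javascript uses
        div.innerHTML = 'hello from python - no javascript involved'
        doc.body.appendChild(div)

    # (connect populate to the widget's page-load signal of choice, then:)
    gtk.main()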

the irony is that the python-hulahop project was only created so that the OLPC team could build their own embedded browser (in python), and they used just a tiny fraction of the available functionality: the "Go" button, the "Back" button, history and so on, all via the python bindings to the internal XPCOM interface that allows direct access to the full functionality of the Gecko Engine.

one other thing needs to be explained before we can get on to what the problem is: XPCOM was "inspired" by Microsoft COM, and it *could* have been absolutely brilliant. COM is deeply, awe-inspiringly powerful - that flexible and that ubiquitous. you may have heard me mention in the past that COM is what allows binary Active-X components compiled *TWO DECADES* ago to remain useful and useable on modern Windows (and Wine) systems today, even though in some cases the company that created them has long since gone out of business.

technically, the problem with XPCOM is that they forgot to implement co-classes, meaning that the only choice available to them is to *remove* "broken" functions and to constantly upgrade, upgrade, upgrade. this problem is at the heart of every single complaint for the past *TEN YEARS* by 3rd party developers using the Gecko Engine in java or c++ applications. they're SICK of having to recompile their applications to suit the mozilla foundation's schedule, particularly as it is such a mammoth task and may need to be done frequently (especially when a security fix lands).

so with that as background we start to get some hints as to the inherent problems that had been stressing out the developers for some considerable time. ...so what did they do about it? well, they responded to the "threat" of webkit (the engine behind chrome) by announcing a pathological "speed, speed, speed" binge - this was around 2010 or 2011. the ABSOLUTE top priority became not to be "open" - even to the extent of violating the spirit *and* the letter of the mozilla foundation charter - but to be "The Best". "The Fastest".

one of the first things to be removed was a single line from a header file - a "friend class" declaration. this one tiny change was utterly profound: it prevented the python-hulahop source code from accessing the XPCOM infrastructure at all. without that "friend class" declaration there was absolutely no way that the GNU/Linux distros could take the standard gecko / xulrunner source code and have hulahop obtain that key strategic pointer to the Gecko Engine's top-level XPCOM object.

when i pointed out how severe the consequences of this ill-considered unilateral decision were to the top people at the mozilla foundation, i was told "it's not a priority. speed is our priority".

when i pointed out that this violated the mozilla foundation charter i did not receive a response.

the second problem was that a couple of the functions (XMLHTTPRequest was one of them) had optional parameters that were only available via a hack of accessing the *javascript* parameter stack. when calling the exact same function from python, c++ or java, obviously this javascript-parameter stack would be empty.

basically i was running into the limitations of XPCOM not having co-classes, but from a different angle. co-classes are what give, e.g., c++ its "optional parameters", or its ability to have entirely different parameter lists for any given function, with the compiler working out which is the best one to use: co-classes are a *runtime* implementation of that, and they're utterly cool, as they allow one to provide binary backwards-compatibility (you simply keep the old functions as-is and just add new ones which implement the new functionality), a means to implement optional parameters (you provide functions with 3, 4, 5 and so on parameters), and more besides.

so i created a patch which extended the XMLHTTPRequest function out to 5 parameters (allowing me, and anyone else, to call it from c++ or java). the patch was *not accepted*. instead, some blithering idiot came up with the amazing idea of modifying the functions to take an "optional number of parameters" parameter. imagine having to write c++ code where one of the parameters tells the compiler how many parameters the function has, and you start to get an inkling of quite how spectacularly stupid this idea of theirs really was.
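
to make the contrast concrete, here is the shape of the two approaches expressed in python - purely illustrative, this is not the actual mozilla code:

    # purely illustrative python - not mozilla's code - showing the shape of
    # the two approaches to optional parameters discussed above.

    # the patch i proposed: spell every optional parameter out explicitly, so a
    # caller coming in from c++, java or python simply passes what it has.
    def xmlhttprequest_open(method, url, use_async=True, user=None, password=None):
        pass

    # the approach that went in instead: an extra argument telling the callee
    # how many of the following arguments are actually meaningful.
    def xmlhttprequest_open_counted(method, url, argc, use_async=None, user=None, password=None):
        if argc >= 1:
            pass  # use_async is valid
        if argc >= 2:
            pass  # user is valid
        # ...and so on: every caller, in every language, now has to get argc right.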

again, when i raised quite how strategically vital this area was, and how important it was that this be discussed (and this incredibly dumb idea not allowed), i got nowhere.

the patch went in - and it entirely destroyed the ability of pyxpcom to correctly call those functions *AT ALL*. XMLHTTPRequest - an absolutely *critical* function to the operation of about 80% of pyjamas-desktop applications - had just been utterly destroyed by some dick being allowed to make arbitrary decisions with blatant disregard for other projects that critically depend on Mozilla Foundation funded technology.

so now, if you want pyjamas-desktop with a gecko (xulrunner) engine, you are forced to use version 1.9.11.1 (the last known-good variant from a stable distribution), or xulrunner 7.0, which briefly made its way into debian/testing and otherwise is - was - only really available in precompiled binary form from activestate.com as part of their komodo editor.

in 2011 i made a brief effort to compile up xulrunner 9.0 in the hopes of saving python-hulahop - making it a dependency - but the thought of becoming both an upstream and downstream maintainer of such a vast amount of code *UNFUNDED* was just too much.

so the summary is: congratulations to the mozilla foundation for violating your own charter - for closing your doors to developers who were actually more interested in expanding the reach of your source code than you are. may you pay for the path you have chosen by learning, painfully, the cost of your closed-minded thinking.

Programming

JavaScript, PHP Top Most Popular Languages, With Apple's Swift Rising Fast 192

Nerval's Lobster writes Developers assume that Swift, Apple's newish programming language for iOS and Mac OS X apps, will become extremely popular over the next few years. According to new data from RedMonk, a tech-industry analyst firm, Swift could reach that apex of popularity sooner rather than later. While the usual stalwarts—including JavaScript, Java, PHP, Python, C#, C++, and Ruby—top RedMonk's list of the most-used languages, Swift has, well, swiftly ascended 46 spots in the six months since the firm's last update, from 68th to 22nd. RedMonk pulls data from GitHub and Stack Overflow to create its rankings, due to those sites' respective sizes and the public nature of their data. While its top-ranked languages don't trade positions much between reports, there's a fair amount of churn at the lower end of the rankings. Among those "smaller" languages, R has enjoyed stable popularity over the past six months, Rust and Julia continue to climb, and Go has exploded upwards—although CoffeeScript, often cited as a language to watch, has seen its support crumble a bit.
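
For the curious, the GitHub half of such a ranking can be roughly approximated with the public search API; the sketch below is only an illustration, not RedMonk's actual methodology.

    # rough approximation of the github side of a language-popularity ranking --
    # not redmonk's methodology, just a sketch against the public search API.
    import requests

    LANGUAGES = ["JavaScript", "Java", "PHP", "Python", "Swift", "Go", "Rust", "Julia"]

    def repo_count(language):
        # ask github how many public repositories it reports for a given language
        r = requests.get(
            "https://api.github.com/search/repositories",
            params={"q": "language:" + language, "per_page": 1},
            headers={"Accept": "application/vnd.github.v3+json"},
        )
        r.raise_for_status()
        return r.json()["total_count"]

    counts = {lang: repo_count(lang) for lang in LANGUAGES}
    for lang, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        print("%-12s %d" % (lang, n))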

Comment Re:I suppose... (Score 2) 82

Assuming that the obsolete compute modules are of standard size/pinout (or, more likely, that compute chassis are only produced for phones that ship in sufficiently massive volume to assure a supply of board-donors), this scheme would work; but I have to imagine that a phone SoC would make a pretty dreadful compute node: Aside from being a bit feeble, there would be no reason for the interconnect to be anything but abysmal.

the nice thing about a modular system is that just as the modules may be discarded from the phones and re-purposed (in this case the idea is to re-purpose them in compute clusters), so may the modules being used in the compute clusters, when better and more powerful processors become available, *also* be discarded... and re-purposed yet again, down a continual chain until they break.

now, you may think "phone SoC equals useless for compute purposes", but this simply is *not true*. you may, for example, colocate raspberry pi's (not that i like broadcom, but for GBP 25 who is complaining?) http://raspberrycolocation.com... - cost per month: EUR 3. that's EUR 36 per year, because the power consumption and space requirements are so incredibly low.

another example: i have created a modular standard, it's called EOMA68. it re-uses legacy PCMCIA casework (which you can still get hold of if you look hard enough). the first CPU Card has 2gb of RAM and a dual-core 1.2ghz ARM Cortex A7 which, as you may know, shares its feature set with the A15, so it may even do Virtualisation. i did a simple test: i ran Debian GNU/Linux on it, installed xrdp, libreoffice and firefox. i then ran *five* remote sessions from my laptop, fired up libreoffice and firefox in each, and that dual-core CPU Card didn't even break a sweat.

so if you'd like to buy some compute modules *now*, rather than wait for google project ara (which will require highly specialist chipsets based on an entirely new and extremely uncommon standard called MIPI UniPro), the crowdfunding campaign opens very shortly:

https://www.crowdsupply.com/eo...

once that's underway, i will have the funding to finish paying for the next compute module, which is a quad-core CPU Card. after that, we can see about getting some more CPU Cards developed, and so on and so forth for the next 10 years.

to answer your question about "interconnect": you have to think in terms of "bang-per-buck-per-module" - space and power used as well as CPU. a 2.5 watt module like the EOMA68-A20 only takes up 5mm x 86mm x 54mm. i worked out once that you could get something like 5,000 of those into a single full-height 19in cabinet - something mad, anyway. you end up drawing something like 40kW, and you get such a ridiculous amount of processing power in such a small space that it's actually power delivery and the backbone interconnect that become the bottlenecks - *not* the Gigabit Ethernet on the actual modules.
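
the back-of-the-envelope arithmetic goes something like this - the slot pitch, tray height and row depth below are illustrative assumptions of mine, not measurements of a real chassis:

    # back-of-the-envelope packing arithmetic for EOMA68 cards in a 42U cabinet.
    # pitch, tray height and row depth are illustrative assumptions.
    card_power = 2.5        # watts per module (EOMA68-A20)
    rack_width = 450.0      # mm of usable 19in mounting width
    pitch = 10.0            # mm per vertical card slot (5mm card + air gap + guide)
    tray_height_u = 2       # rack units per tray of edge-mounted cards
    rack_units = 42         # full-height cabinet
    rows_deep = 5           # rows of 86mm-long cards front-to-back (assumed)

    cards_per_row = int(rack_width // pitch)       # 45
    cards_per_tray = cards_per_row * rows_deep     # 225
    trays = rack_units // tray_height_u            # 21
    total_cards = cards_per_tray * trays           # 4725 -- "something like 5,000"

    print("cards per cabinet:", total_cards)
    print("module power alone: %.1f kW" % (total_cards * card_power / 1000.0))
    # ~11.8 kW for the modules themselves; backplanes, switches, PSU losses and
    # cooling push the per-cabinet draw a good deal higher.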

bottom line: there's a lot of mileage in this kind of re-useable modular architecture. help support me in getting it off the ground!
https://www.crowdsupply.com/eo...

Comment didn't apply the brakes at all (what?!) (Score 1) 304

this is not a surprise. i have good 3d visual modelling ability, which has allowed me to assess gaps between vehicles and, for example, drive at 30mph past kerbs or bollards in width-restricted areas with an inch to spare on either side. i remember, one day, driving along a motorway with a former partner. approximately fifty times throughout an hour-long journey, she would drive in the middle lane directly up to the back of the car in front at more than 15mph faster than the other vehicle, *apply the brakes* when the vehicle in front was only 8 to 10 metres away, and then and *only then* look in the side mirror to see if it was safe to change lane.

by contrast i would be constantly looking left, right and back (which is actually very tiring), would know where all vehicles were, even up to a mile away in either direction, and, using 3D modelling based on speeds and locations of other vehicles, would *predict* whether it was necessary for me to speed up or slow down in order to merge into faster (or slower) traffic in order to overtake vehicles *plural* in front. or, in some cases, whether to simply sit there happily at the speed of the vehicles in front.

now, this person - my former partner - drove an average of *four to five hours* per day like this. but if they are anything to go by, i am honestly and genuinely not surprised to hear that there are people who cannot judge distances, for whom the world is 2D, devoid of depth and the awareness that goes with it.

*that having been said*... the addition of "features" that apply the brakes without permission seems like an incredibly bad idea. i am reminded of a discussion recently... allow me to quote:

"We inadvertently built our own panic and short-sightedness into
the very systems designed to protect us from our worst impulses"

http://aeon.co/magazine/techno...

then, also, there is the failure of the three laws of robotics (yes, asimov's work demonstrated that the three laws are an *outright failure*, not a success). the three laws basically produced robots that *prevented* humanity from taking risks. on a species level, the three laws *terminated* our evolution and advancement.

so, honestly, i have to say that if people cannot have the good sense to be sufficiently aware when driving a 1500 kilogramme object that is capable of causing death to themselves and those in the immediate vicinity, then please, with much respect and love, give their family a darwin award, be glad that they weren't driving in *your* vicinity at the time, and be glad that our species' gene pool's "average spatial awareness" capability just went up a tiny notch.

Comment poisonous and risky name policy. (Score 5, Insightful) 210

i pointed this out before, but google's policy of forcing people to give their *real* names is incredibly dangerous. google set themselves up as the *authority* - the guarantor - that the person you are contacting is exactly who google *says* they are. now, given how easy it is under gmail to end up with confusingly similar email addresses (with and without "." in them, for example), we have a potentially extremely litigious situation where someone could be deceived and then sue google - rightly - for damages, on the basis that google's guarantee of identity was not properly upheld.
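
to illustrate what i mean about confusingly similar addresses - gmail ignores dots in the local part (and anything after a "+"), so visually distinct addresses can deliver to the same mailbox. a tiny sketch, with made-up example addresses:

    # tiny illustration: gmail treats dots in the local part (and anything after
    # a '+') as irrelevant, so addresses that *look* different can deliver to
    # the same mailbox.  the example addresses are made up.
    def gmail_canonical(address):
        local, domain = address.lower().split("@", 1)
        if domain in ("gmail.com", "googlemail.com"):
            local = local.split("+", 1)[0].replace(".", "")
        return local + "@" + domain

    print(gmail_canonical("John.Smith@gmail.com"))     # johnsmith@gmail.com
    print(gmail_canonical("j.ohn.smith+hr@gmail.com")) # johnsmith@gmail.com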

contrast that with the situation where *everyone knows* that you don't trust email, or any kind of unconfirmed interaction on the internet.

and i think this is what people felt - subconsciously - both inside google and outside: that there was something very, very badly wrong about forcing people both to disclose their identity and to allow google to "certify" it.

the other thing is just that... google+ is... simply... devoid of excitement and interest. it feels like it's a single-track uninspiring place, with one direction that Thou Shalt Go: google's waaaay.

contrast this to how facebook operates (or how myspace operated): i realise it's information-overload, but that's *precisely* what makes facebook (and made myspace) an interesting place to be. there are several ways to get to the same stuff.

strange as it may be for someone who is alarmed at the ease with which it is possible on facebook to track someone down merely from their first name (yes, i met someone at a party, couldn't remember their surname, but managed to guess their approximate age, guessed that they must live in the approximate nearby area, then used the advanced search on facebook to find them... took a couple of weeks to work out, i have to admit, and no, i am *not* going to describe here on slashdot how it's done...) ... despite that, i have to say that there is actually something useful, and just generally more... homely, about facebook than there is about *any* google product. google products are just... sterile and functional. you use gmail to send mail. you use google search to... well... search. but you use *facebook* to tell everyone you know that you wiped your arse today, and that's hilarious.

it also occurs to me: i wouldn't want to put personal stuff up on google: they might index it and let people search on it. and i think that's really the key, there. facebook is closed. you *have* to have a login. your personal stuff is *not* indexed publicly in search engines.

so, sorry google: you got it wrong on this one, and you can't be trusted, even if you said you'd get it right.

Comment Re:I don't get it (Score 0) 231

Can anyone explain why empty space has energy?

blind leading the blind, here, but my non-specialist physics background might be a bit easier to understand than someone who mentions "QCD" at you. the way i understand it is that when you have particles around, they have E.M. and gravitational fields, and they have binding forces and so on at the very-close (atomic) level, which kiiinda means that if you get close to them with another particle you either get sucked in or banged away (like billiard balls) - actually _very_ much like billiard balls, in that you have to get *really* close for a deflection to occur [at all], but when you do, you really know about it.

and what we also know is that in a non-vacuum there are *lots* of these particles. so, relatively speaking, even in a gas, any one particle really doesn't have to go that far before it gets banged about by another particle.

in other words, your average particle or your average photon (a cosmic ray being a particle arriving with a very high energy content) has a huge amount of "resistance" applied to it, in pretty much *all* directions. this "resistance" means we end up with solid matter (ok, gases too) that *stays* solid. stable. follows newton's laws and so on.

in empty space, there is *no such resistance*. there's nothing to get in the way, nothing to interfere with particles or rays. so even the smallest disturbance, when two cosmic rays happen to cross paths, or one hits an atom, can result in "smaaashhh, wheeee!". and the by-products of such collisions, which would normally be instantly destroyed by neighbouring particles, get to stay alive for much longer [possibly forever], precisely because there *aren't* any neighbouring particles.

so my take on this is that it's not so much that "empty space has energy", it's that empty space - by *being* empty - doesn't "resist" (so to speak) the creation process of particles. *scratches head*. ... a bit like how, if you have one extrovert at a party that's only just started, with huuge rooms and nobody knowing anyone else, the extrovert will stand in the middle of the room happily dancing and the very few other guests will hug the walls; but if you have *lots* of extroverts in the room, then, well... it's just another awesome party :)

Comment avogadro's constant and particle density in space (Score 1, Interesting) 231

throw-away comment, here :) i did a funny little bit of experimenting a couple of years back, when someone posted here an article about the density of deep space (the number of atoms per cubic metre) having been measured. anyway, remembering my o-level chemistry, i went, "hmm... that's interesting: i wonder if there's a relationship between that particle density and avogadro's constant."

so... i went: density = 7 * 10e-26, avogadro's const = 6.023 * 10e23; multiply the two together and you get 4.2154. just for fun take the cube root and oo! you get 1.6153982. now, to within the experimental uncertainty of the measurements made of the density of the deep-space vacuum, that number should instantly be recognisable, +/- a bit, as the golden mean ratio (1.618 etc etc).
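
for anyone who wants to reproduce the arithmetic exactly as written above:

    # reproducing the arithmetic exactly as written above
    density = 7 * 10e-26       # quoted particle density of deep space
    avogadro = 6.023 * 10e23   # avogadro's constant, as written
    product = density * avogadro
    print(product)               # ~4.216
    print(product ** (1.0 / 3))  # ~1.6155 -- a whisker away from the golden mean ratio, 1.618...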

so we have a relationship - which has absolutely no "real" meaning whatsoever [traditionally called "numerology", in a disparaging way, in the physics community...] - between the density of particles in vacuum, avogadro's constant, and the golden mean ratio, in a formula that has very low kolmogorov complexity (https://en.wikipedia.org/wiki/Kolmogorov_complexity). which, as i do not have the kinds of hang-ups that the physics community has about these kinds of things, i find to be... beautiful.

and that's in and of itself enough for me. i don't care what the physicists say :)

anyway, as this is slashdot, i thought i'd happily derail the conversation with a nice bit of random semi-related nonsense, and see if anyone notices...

Comment Re:high quality hardware (Score 1) 592

there is a problem with power in the EU - it's not properly earthed.

?

as in, the power sockets in the EU typically do not have an earth pin... at all. they are 2-pin, not 3-pin. so when i sit with my aluminium-cased laptop with my feet up on the radiator, not only does the WIFI stop working and the SSD get spiked, but i also get a mild electric shock.

is that clearer?

Comment Re:Liberated? What about the hardware? (Score 3, Interesting) 229

You have to take steps to make progress. You can take something useful and make it more open (like librem) or you could start from scratch and make something very basic that is completely open.

You can take bigger strides towards openness and get something like Novena, but then you make other sacrifices (size, cost, performance).

I guess if you had infinite money you could make a high spec, completely opensource laptop.

interesting that you should say this :) i am taking a different approach. i am also developing a laptop where the goal is to reach FSF-Endorseability *and* high-end specs. i am doing it one phase at a time, as you suggest... however, instead of having infinite money, i am using creativity and ingenuity (posh words for "persistent bloody-mindedness combined with desperation stroke eye-popping frustration").

sooo, i decided to go the "modular" route, but first had to create a decent hardware standard - one that will still be here in 10 years' time but is simple enough for the average person (or a 5-year-old, or an 80-year-old) to use. it's based on an old "Memory Card" standard - you may have heard how PCMCIA is no longer being used? well, the case-work is still around :) so, re-using PCMCIA it is. and with all the benefits of a "Memory Card", you now get a "Computer Card": upgradeable, swappable, saleable, transferrable, storable.

but then, of course, because of that, yaay, you now have to design entirely new casework, not just a motherboard. talking to casework suppliers didn't, um, go so well, so i have to do it myself. bought a mendel90 6 months ago...

...but mendel90s don't do injection-moulded plastics, they do 3d-printed filament plastics. and when presented with a potential $USD 20,000 cost for creating an injection mould (you send your STL files off, someone adapts them, CNCs out two steel halves, and then a little *team* of chinese people sits there for weeks on end polishing out all the CNC burrs... then you find out it's *completely wrong* and have *another* $USD 20,000 to pay... no wonder ODMs quote $USD 250,000 for developing laptops!!!)...

...anyway, that's all completely insane, so i thought, "hmm, i wonder if you can create reverse-3d-printed moulds to do injection-mould prototyping", and it turns out that you can. so i could at least - on a low budget - make a few runs out of very-low-temperature plastic (so as not to burst the 3d-printed mould under pressure); hell, i could even use plasticine for goodness' sake, just to get a proof-of-concept. *then*... and this is the hilarious bit... there's a girl who's been doing LostPLA home-grown aluminium casting... *using 1500W microwave ovens* :)

http://media.ccc.de/browse/con...

so in theory i could quite conceivably even try doing the casting of the inverse-moulds for plastic injection *myself*, out of landfill-designated aluminium bicycle rims. do watch that talk: julia is surprisingly subtly funny, there were lots of jokes that the audience didn't get (not a native english speaking audience), and a few later that they did.

bottom line: it *can* be done... if you make the decision, and damn well stick at it until you succeed. if you're interested in following along, here are the links:

* micro-desktop (launching very soon) which has the first EOMA68 module: https://www.crowdsupply.com/eo...
* the 7in tablet (due to go to assembly this week) http://rhombus-tech.net/commun...
* the 15.6in laptop (currently developing the casework) http://rhombus-tech.net/commun...

on the laptop i am going in stages - which, as you quite rightly point out, is a good idea. the LCD is a reasonably-priced, easy-to-source 1366x768 panel, which also means only single-channel LVDS, and therefore a lower risk of EMC non-compliance during the $USD 3,000 FCC testing (yes, after 3 years i finally found a chinese company willing to do FCC testing for only $3k - everyone else usually charges $7k to $12k).

and on the CPU side, the neat thing about a modular design is: i am tracking _two_ companies that have FSF-Endorseable SoC designs. one is ICubeCorp (their current offering is a $2 (!!!) *quad* core 400mhz 55nm SoC) and the other is Ingenic. sadly ingenic's current FSF-Endorseable option only goes up to 1200x720 LCD resolution - just short of what i need. however... they will have a quad-core coming out this year. i waaant iiiit :)

so *instead* of a $15k to $20k total PCB redesign cost, all that's needed is a $3k to $5k budget for doing... yep, you got it: a tiny 43x78mm PCB plus some plastic trim. and users will, instead of having to throw away the *entire* laptop (or tablet, or whatever), just... push a button, pop out the old non-FSF-Endorseable CPU Card, plug in the new one (takes about... what... 5 seconds?) and be done.

Comment Re:a better question (Score 3, Interesting) 592

because people pay apple more money, so they can afford better designers and can get better components. [longer post explains more, see http://slashdot.org/~lkcl]

lenovo *used* to do this back when the thinkpad line was IBM's. IBM *used* to buy the more expensive components and then run them at lower clock rates, which *used* to result in much more reliable products. the thermal stresses placed on ceramic packaging (even during normal operation) cause it to develop micro hairline cracks; high temperatures also cause migration of solder, as well as of the metals inside the silicon ICs themselves.
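
to put a rough number on "cooler = more reliable", the usual arrhenius rule-of-thumb from reliability engineering looks like this - the activation energy and temperatures below are illustrative assumptions of mine, not IBM or lenovo data:

    # rough arrhenius-style estimate of how much faster components age when run
    # hotter.  activation energy and temperatures are illustrative assumptions.
    import math

    K_BOLTZMANN_EV = 8.617e-5   # boltzmann constant, eV per kelvin
    E_A = 0.7                   # assumed activation energy for silicon wear-out, eV

    def acceleration_factor(t_cool_c, t_hot_c, e_a=E_A):
        """How many times faster failure mechanisms progress at t_hot vs t_cool."""
        t_cool = t_cool_c + 273.15
        t_hot = t_hot_c + 273.15
        return math.exp((e_a / K_BOLTZMANN_EV) * (1.0 / t_cool - 1.0 / t_hot))

    # a part held at 60C versus the same part allowed to sit at 85C:
    print("%.1fx" % acceleration_factor(60, 85))   # ~5.5x faster ageing at 85C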

Comment high quality hardware (Score 2, Interesting) 592

whilst i find the practices of apple absolutely deplorable - forcing people to sign up for an ID in order to use hardware products that they have paid for, taking so much information that even *banks* won't work with them - bizarrely the amount of money that people pay them is sufficient for apple to spend considerable resources on high-quality components and design.

i have bought a stack of laptops in the past (and always installed Debian on them - see http://lkcl.net/reports/) and have found them to be okay, but within 2 to 3 years they are always showing their age or, in some cases, completely falling apart. on the 2nd Acer TravelMate C112 i bought, i actually wore a hole through the left shift key with my fingernail after 2 years of use. hard drives died, screen backlights failed, and an HP laptop had such a badly designed power socket that it shorted out one day and almost caught fire. i had to scramble for a good few seconds to pull the battery out, smoke pouring out of the machine as the PMICs glowed.

about 6 years ago my partner had the opportunity to buy both an 18in and a 24in iMac at discounted prices. i immediately installed Debian on the 24in one: it took 4 days, because grub2-efi was highly undocumented and experimental at the time. so i had a huge 1920x1200 24in screen (which over the next few years actually damaged my eyes because i was too close: my eyesight is now "prism" - i've documented this here on slashdot in the past), a lovely dual-core XEON, 2gb of RAM, and it was *quiet*. there is a huge heatsink in the back, and the design uses passive cooling (vertical air convection).

awesome... except not very portable. and no spying or registration of confidential data with some arbitrary company that you *KNOW* is providing your details to the NSA, otherwise there's this conversation which begins "y'know it's *real* hard to get that export license for your products, if you know what i mean, mr CEO".

so, when i moved to holland i had to leave the 24in iMac behind - apart from anything else, 2gb of RAM was just not enough. i leave firefox open for 4-7 days (basically until it crashes), opening over 150, sometimes even as many as 250, tabs in a single window. it gets to about 4gb of RAM and starts to become a problem: that's when i kill it. on the iMac it was consuming most of the resident RAM. i compile programs: 2gb of RAM is barely enough for the linker phase of applications like webkit (which requires 1.6gb of RESIDENT memory in order to complete within a reasonable amount of time). i run VMs with OSes for study.
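
for anyone wanting to keep an eye on the same thing, a quick linux-only sketch reading /proc does the job - the process-name pattern is obviously whatever suits your own setup:

    # quick linux-only sketch: total up the resident memory of every process
    # whose name matches a pattern (e.g. firefox), straight from /proc.
    import os

    def resident_mb(pid):
        # the VmRSS line in /proc/<pid>/status is the resident set size in kB
        try:
            with open("/proc/%s/status" % pid) as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        return int(line.split()[1]) / 1024.0
        except (IOError, OSError):
            pass
        return 0.0

    def report(pattern="firefox"):
        total = 0.0
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open("/proc/%s/comm" % pid) as f:
                    name = f.read().strip()
            except (IOError, OSError):
                continue
            if pattern in name:
                mb = resident_mb(pid)
                total += mb
                print("pid %s (%s): %.0f MB resident" % (pid, name, mb))
        print("total resident: %.0f MB" % total)

    report()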

so i had got used to the 1920x1200 screen, where i could get *five* xterms across a single window. i run fvwm2 with a 6x4 virtual screen, and run over 30 xterms in different places, plus 3 different web browsers; as i am now developing hardware i run CAD programs in one fvwm2 virtual screen, PDFs in the ones next to it, Blender in one virtual screen, OpenSCAD in another, firefox in another, chromium in yet another; then i have to view and manage client machines, so i use rdesktop to connect to those (moving over to a free virtual window area to do it) - the list goes on and on.

so i figured, "hmmm, laptop... but with a good screen. must have lots of RAM too, minimum 8gb, must have a decent processor". i then began investigating, and found the Lenovo Ideapad. great! let's buy it! .... except their web site crashed. so i then - reluctantly - began investigating apple laptops. 2560x1600 LCD, 8gb of RAM, dual-core dual-threaded processor: $USD 1500 and *in the UK*, with a U.S. keyboard, so nobody was buying it. researched it, saw the success reports of people installing debian on it, knew it could be done: sold, instantly.

so now i am extremely happy with this machine - not with apple themselves, but with the hardware that i have. it's light, it's fast, it has a sturdy aluminium case, and the fan only comes on if i swish large OpenSCAD models around in 3D (or if firefox gets overbloated as usual).

the only downsides i've had are as follows:

* despite having an intel graphics chipset, it's so new that video playback is not supported. i had to set VLC to use "OpenGL" as the playback option, installing the accelerated opengl drivers (which worked)
* there is a problem with power in the EU - it's not properly earthed. this results in *massive* EM interference that spikes the SSD controller, causing hard resets once a second. those cause an entry to be written to /var/log/syslog, which then causes another failure, which results in another entry and so on. to solve this i had to follow the majority of the read-only rootfs instructions normally reserved for embedded systems: move *everything* out of /var/log into a tmpfs. i hacked it into /etc/rc.local which is not recommended but does the job
* the power button by default causes a power-off; i often press it accidentally and haven't worked out how to disable it.
* the camera is a proprietary PCIe device from broadcom that has not yet been reverse-engineered
* EMI power spikes often cause the wireless to be a bit flakey as well (understandably). solved by putting in a high-power linksys router and backing it down to 802.11b.

the 2560x1600 screen however is absolutely fantastic. i can now get *TEN* 80x50 xterms on a single screen. i am currently running firefox at 1600x1300 in one window and have room for *FOUR* xterms to the left of it. i have all the multimedia-related applications (alsamixer, qjackctl, VoIP) on their own dedicated virtual screen, whereas before they had to be spread out across at least two.

and the funny thing is that even with the tiny font size, i am not straining to look at it. i got used to it within about 4 hours.

so, this is the machine i will stick with for at least the next 5 years. i simply won't need to buy another unless i start doing something radically different.
