"it is now under four feet of garbage in a landfill site the size of a football pitch."
WHERE?? WHICH ONE, DAMNIT!!
"it is now under four feet of garbage in a landfill site the size of a football pitch."
WHERE?? WHICH ONE, DAMNIT!!
Being able to replace the core of your tablet doesn't fix scratched screens, aged batteries, and general wear...
you need to remember to view this from both sides. it's possible to replace *either* the CPU Card *or* the chassis, and in each case you have significant advantages and lower costs.
when did you ever buy a hermetically-sealed product that you could upgrade? the clue is in the word "hermetically-sealed"....
and any tablet that you can replace something on is going to be thicker
true. i have a design which uses 3.3mm PCCARD. if you have around $250,000 for the tooling costs for all the parts (assemblies, housings, sockets, casework) i can get it done... maybe in about 6 months' time. or... we could use off-the-shelf parts and get immediately into production.
which would you prefer? perfect waffle-ware - more expensive due to the investment and NREs - or actual product that's reasonably-priced because there's no investor overheads?
and less "tablet like" than a 'nice' current tablet.
simply not true.
The whole idea of the EOMA concept should (if/when it takes off big) mean that you won't have to "hope the laptop shell's $ATTRIBUTE is $VALUE".
you know what? whoever you are, foobar bazbot, i'm amazed and delighted to see that you clearly Get this concept. there are a couple of things that you left out:
1) from a CPU Card manufacturer's perspective, they love the fact that a short-lived SoC in a ready-to-go pre-packaged product can be sold in much bigger volume because it's shared - for the relatively short duration that the SoC has its day - across potentially dozens of mass-volume products.
2) from your perspective, (1) translates into cost savings: the CPU Card manufacturer can take advantage of stable huge-volume pricing, and the foundries, with larger orders, can dedicate and optimise a fab to get the yields up. both the volumes and the better yields automatically (one might hope!) translate into lower pricing.
3) from a cost perspective, the fact that there is about an extra $6 on the BOM when compared to a monolithic product... this is *completely* dwarfed by the immense cost saving when you buy one or more EOMA68-compliant "chassis" and share a single CPU Card between them. laptop and tablet are the two obvious examples, with the clear additional benefit that applications and data transfer conveniently *between* the products.
4) from an environmental waste perspective, EOMA68 significantly reduces e-waste by making it possible to re-purpose older CPU Cards down a chain. today's latest-and-greatest laptop/tablet CPU Card becomes tomorrow's router/NAS/SoHo server CPU Card.
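to put some numbers on the cost-sharing argument in (3) - every figure below is invented purely to show the shape of the argument, not a real price:

```python
# illustrative only: one CPU Card shared between a laptop chassis and a
# tablet chassis, vs buying two monolithic devices. every price here is
# invented to show the shape of the argument, not a real figure.
EOMA_BOM_OVERHEAD = 6.0   # the ~$6 extra BOM per chassis for the EOMA68 slot

cpu_card = 50.0           # hypothetical CPU Card price
laptop_chassis, tablet_chassis = 150.0, 100.0
laptop_mono, tablet_mono = 200.0, 150.0   # hypothetical monolithic equivalents

# EOMA68: pay for the SoC/RAM/flash once, plus two chassis with the overhead
eoma_total = (cpu_card
              + laptop_chassis + EOMA_BOM_OVERHEAD
              + tablet_chassis + EOMA_BOM_OVERHEAD)
# monolithic: pay for the SoC/RAM/flash twice
mono_total = laptop_mono + tablet_mono

print("shared CPU Card:", eoma_total)        # 312.0
print("two monolithic devices:", mono_total)  # 350.0
```

whatever numbers you plug in, the per-chassis $6 is small next to buying the SoC, RAM and flash a second time.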
so there is an enormous amount going on here in what appears to be an otherwise unobtrusive "wtf??" moment. i haven't begun to describe the benefits to the linux kernel developers yet (but have posted a number of times on LKML explaining the N CPU Cards plus M products instead of N*M monolithic designs.)
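the N-plus-M point can be sketched with trivial arithmetic - the counts here are hypothetical:

```python
# hypothetical counts: N SoCs on the market, M product types. monolithic
# design means every SoC/product pairing is a separate board (and a
# separate BSP/bring-up to maintain); EOMA68 reduces that to N cards
# plus M chassis.
def designs_monolithic(n_socs, m_products):
    # every SoC paired with every product is a separate board design
    return n_socs * m_products

def designs_eoma68(n_socs, m_products):
    # each SoC appears on exactly one CPU Card; each product is one chassis
    return n_socs + m_products

n, m = 10, 12
print(designs_monolithic(n, m))  # 120 board designs to bring up and maintain
print(designs_eoma68(n, m))      # 22
```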
The announcement and website clearly state that the feature board which the EOMA68 docks to is open hardware; yes the A20 is not open hardware, and that was never stated otherwise.
there's nothing to stop anyone from creating OSHW EOMA68-compliant CPU Cards. a good starting point for anyone wishing to do so would be Dr Ajith Kumar's work on a GPL-compliant KiCAD board, or any one of the boards from TI or Freescale which have full schematics and even CAD/CAM PCB files - complete - available.
as for other CPU cards, those are further away but on the roadmap.
they are indeed. tracking down a cost-effective, desirable SoC from - and this is also a really important bit - a fabless semiconductor company that respects the GPL is very, very hard. let's go through the list so far of CPU Cards for which i've completed between 30% and 98% of the PCB CAD/CAM drawings (the A20 one is the only one that's reached 100% completion so far):
* AM3389 CPU Card. GPL-compliant: yes. cost-effective: most definitely not. desirable: well, it turned out that there was a proprietary blob for HDMI, and it was to be an FSF-Endorseable CPU Card, so no.
* iMX6 CPU Card. GPL-compliant: yes. cost-effective: at USD $35 for a quad-core SoC in 1k volumes when the competition is USD $12: mmmm.... no. desirable: yes.
* Ingenic jz4760 CPU Card. GPL-compliant: yes. cost-effective: yes (around $7). desirable: as it's only a 1ghz single-core MIPS with no HDMI output... mmm... no not really.
* Rockchip RK3188 Quad-core CPU Card. GPL-compliant: no - only "leaked" source code is available. cost-effective: yes (around $12 - for a quad-core! amazing). desirable: yes (good features). but the GPL non-compliance nixes it - that, and the huge NREs demanded by rockchip for their development board details.
the list keeps going on and on like this. most of these issues go away once we have some sales, so if you'd like to see this project succeed, help out by buying one of these engineering boards. in the future you'll be able to re-purpose the old CPU Card by getting an alternative chassis (just the chassis), or you'll be able to sell the old CPU Card on ebay.
windmills - chaaarge!
reactos was the real reason why i ported samba-tng to w32, using mingw32 to compile it up. worked absolutely great. unfortunately you cannot effectively run samba-tng/w32 under windows (without changing the port numbers) because the ports 137, 138, 139 and 445 as well as the critical NamedPipe services are already occupied... by microsoft's implementation of SMB as well as microsoft's implementation of the critical MSRPC logon services (LSASS, NETLOGON and so on) without which it would be flat-out impossible to even log in to the box in order to see if the services were running!
unfortunately, because wine has had to implement MSRPC (completely independently), although samba-tng/w32 would run successfully you would likewise have to change the MSRPC pipe service names as well as the TCP and UDP port numbers of the endpoint mapper (port 135), because wine has had to implement \PIPE\winreg, \PIPE\srvsvc and many others which are *also* implemented in samba-tng.
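if you want to see the port clash for yourself, a quick sketch (not part of samba-tng - just an illustration) is to try binding the well-known ports and see which ones refuse:

```python
# sketch: try to bind the NetBIOS/SMB/endpoint-mapper ports and report
# which are already taken. on a stock windows box, microsoft's own SMB
# and MSRPC services hold these, which is exactly why a second
# implementation such as samba-tng/w32 cannot bind them without
# changing the port numbers.
import socket

SMB_SERVICES = [
    (135, False, "MSRPC endpoint mapper"),
    (137, True,  "NetBIOS name service"),
    (138, True,  "NetBIOS datagram service"),
    (139, False, "NetBIOS session service (SMB)"),
    (445, False, "SMB over TCP"),
]

def port_is_free(port, udp=False):
    kind = socket.SOCK_DGRAM if udp else socket.SOCK_STREAM
    s = socket.socket(socket.AF_INET, kind)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        s.bind(("0.0.0.0", port))
        return True
    except OSError:   # EADDRINUSE, or EACCES (ports < 1024 need privileges)
        return False
    finally:
        s.close()

for port, udp, name in SMB_SERVICES:
    state = "free" if port_is_free(port, udp) else "in use / not bindable"
    print("%5d  %-32s %s" % (port, name, state))
```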
the amount of cross-over between samba, wine and reactos at the core fundamental networking level (much of NT's design was based around networking and RPC services, even when run as a stand-alone system), is just crazy. especially when you consider that it takes about 250,000 lines of hard-core intensive c code just to get even the _fundamentals_ of MSRPC correct. it's been over twelve years so i've had to stop letting people know about the duplication of effort and just let them get on with spending their time learning the hard way that they're working on exactly the same thing... without sharing any effort between them.
there are some absolute golden nuggets in amongst the wine/reactos code. periodically - every few years - i have a go at extracting the DCOM implementation from wine, to build a stand-alone GNU/Linux + w32 DCOM library. the last person who tried that called it "TangramCOM". he forgot to commit some critical bits to the repository (such as the IDL compiler). if you've ever worked with DCOM at a high level (using e.g. python) you'll know that it's just stunningly easy. DCOM was - still is - why microsoft has been so insanely successful after all this time. the equivalent in the MacOS world is Objective-C, which achieves similar results (without the networking) at the compiler level - pretty ambitious and nuts, but highly effective all the same.
ahh, what can you do, eh?
i'm just going over the batch of OCZs that we had to pull from locations all over the world. the cost of the recall was far in excess of the cost of the drives. over 200 of them. if you have an OCZ Vertex drive with firmware revision 1.11, it *will* fail spectacularly. all you need to do is set up 64 sets of parallel writes, run them for 10 minutes, and you *will* get data corruption. you can do this in a shell script (i used python) by spawning "cp -aux" of a directory hierarchy with 1500 subdirectories and 3,000 small files. 64 parallel sets of copying (and then deleting) i.e. if you do around 1.5 million file-directory creates and deletes you are *guaranteed* to have data corruption.
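the recipe, sketched in python (the paths - src, workdir - are placeholders for wherever the drive under test is mounted, and the helper names build_tree/run_round are mine, not part of any tool):

```python
# sketch of the corruption recipe: many parallel "cp -aux" copies of a
# deep directory tree, then delete them all and repeat. the source tree
# and scratch area must both live on the drive under test.
import os
import shutil
import subprocess

def build_tree(root, n_dirs=1500, files_per_dir=2):
    # the original tree: ~1500 subdirectories, ~3000 small files
    for i in range(n_dirs):
        d = os.path.join(root, "dir%04d" % i)
        os.makedirs(d, exist_ok=True)
        for j in range(files_per_dir):
            with open(os.path.join(d, "file%d" % j), "w") as f:
                f.write("x" * 64)

def run_round(src, workdir, n_workers=64):
    # one round: n_workers parallel "cp -aux" copies of src, then delete
    os.makedirs(workdir, exist_ok=True)
    procs = [subprocess.Popen(["cp", "-aux", src,
                               os.path.join(workdir, "copy%02d" % i)])
             for i in range(n_workers)]
    for p in procs:
        p.wait()
    for i in range(n_workers):
        shutil.rmtree(os.path.join(workdir, "copy%02d" % i))
```

a 1500-dir/3000-file tree copied 64 ways is ~288,000 creates (and as many deletes) per round, so around three rounds reaches the ~1.5 million file/directory operations that reliably triggered the corruption on the 1.11 firmware.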
the strange thing: the very first Vertex OCZs released were absolutely fine. what i learned just yesterday was that *even* with a drive that has been consistently failing, if you downgrade its firmware to revision 1.7 *it becomes absolutely fine*.
the problem we have is that upgrading units in the field is going to be rather tricky: the firmware upgrade system provided by OCZ is an ISOLINUX cd image with FreeDOS and a firmware-flash program, and none of the systems have a screen, let alone a keyboard.
by contrast, we have somewhere around 500 Intel 320s installed world-wide. there have only ever been 3 failures.
for the selection of the new drive (Intel 320s are end-of-life) i'm endeavouring to replicate the test system which was reported on slashdot to have destroyed 12 different SSDs in under an hour per drive. i have managed to destroy one already: a Crucial M4. it took 2,500 power-cycle interruptions (the program's still in development), so the M4 failed in under 24 hours. don't get that one. still on the list: Innodisk 3-MP Sata Slim, Toshiba's new SSD, and Intel's new S3500.
the toshiba i can already tell you about: if you interrupt its power you will find that, on power-up, some of the outstanding write requests have *not* been actioned. this is partly good news: it means that the drive is detecting that it has lost power, so doesn't risk corrupting itself. i'm looking forward to properly testing the 3-MP because they're cheap, small, and the datasheet, unlike any other manufacturer's, has a heck of a lot of detail about how they actually do power-loss protection. most other manufacturers don't even bother to mention power-loss protection - that's if you can find a proper datasheet at all.
i did an analysis of the Quark X1000 based on the Galileo schematics, and the assessment isn't good:
the key failure is that there's absolutely no I/O multiplexing. given that intel actually designed the PXA series of ARM processors before selling them to marvell, you have to wonder what was going through the minds of the engineers behind the Quark X1000.
the main points of the above link which automatically and very unfortunately make the Quark X1000 a complete failure are:
1) there are no video outputs, and the only options are USB2 (DisplayLink with no 3D capabilities and too slow to do video), SPI (for character-based LCDs) or PCIe. to match a 0.4 watt processor with a 20 watt 3D PCIe graphics card is completely insane. there are therefore no good options for video display of *any* kind.
2) there's no "industrial" or "embedded" style GPIO. no CAN bus, no PWM, no ADC, no DAC. there's also no audio. there's not even I2S and there's certainly no SPDIF. so to make up for that lack you'd have to add something like a Cortex M0, M3 or M4 embedded controller... and given that those usually come with built-in Power Management, NAND Flash and SDRAM, for the majority of purposes where you'd need to use an embedded controller with a Quark as a GPIO expander you'd be better off, cost-wise, with... just the embedded controller.
overall then there really aren't *any* markets that this chip could be useful for. if i'm wrong about that, and anyone can actually think of good uses for it, please do speak up.
there are too many bugs in btrfs for it to be installed in production:
especially this one, which has yet to be resolved:
which is a major usability issue. yes, i made the mistake of installing btrfs on a live production system.
i was speaking to someone who works in aerospace: they have deep concerns about the geometry shrinks in the chase for extra storage. the smaller the geometry gets, the less reliable it gets, it's as simple as that. they are having enormous difficulty getting hold of large-geometry small-capacity NAND flash ICs.
also, i've begun to replicate the drive-torturing software which was mentioned a few months ago here on slashdot. one SSD i tested which is reported to have good power-loss protection failed in THREE minutes. another took 24 hours and 2,500 power-cycles.
if you've seen the film with nicolas cage, it highlighted for me for the very first time that the U.S. Constitution was written by some extremely far-sighted people. there are specific words in it which not just permit but *OBLIGATE* you - each and every american citizen - to overthrow any government that has become tyrannical or otherwise lost its way.
given that america has such a significant hold over the rest of the world, *i* as a UK citizen am obligated to point this out to you, because by not doing so it will have an adverse effect (through erosion of the sovereign rights of each and every country - erosion initiated by the corrupt U.S. Govt infrastructure) on *my* country, to which *i* hold allegiance.
so - get to it, americans - get your act together!
tsk tsk - he should have put in a freedom of information request instead.
yes. many people are unaware of the fact that these major power plants - coal, gas, oil, nuclear - are only efficient when they are at maximum capacity. if you shut them off for any reason (and this can be done fairly quickly), getting them back up to temperature can take *weeks*.
so any investor is going to want guarantees that the power plant in which they're to be investing billions will provide a guaranteed return on investment. even in cases where there's complete catastrophic failure [hey, what's insurance for, huh?]
btw, as an off-topic aside, the reason why wind power is a failure even before it becomes popular [which it won't] is that its power provision is completely arbitrary. in fact it's not very well-known, but the wind systems in scotland where i used to live were heavily subsidised: the UK Govt pays them 25 thousand pounds A MONTH to NOT run them. and, as they're motors as well as generators, when it's not windy enough, from what i hear they're actually POWERED to make them LOOK like they're generating electricity, so that people don't wonder why they're not running.
wind turbines are only operational between 8m/sec (about 18mph) and 24m/sec (about 54mph). below that there's not enough wind to make them turn; above that they're dangerous (one blew up in wind-speeds of 150mph last year - made a great photo in the local scottish paper). and yet people insist on commissioning wind turbines on the basis of 100% operational capacity.
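to illustrate why commissioning on 100% capacity is nonsense, here's a back-of-envelope model using the cut-in/cut-out speeds above; the wind-speed distribution and the rated-output fraction are invented purely for illustration:

```python
# back-of-envelope: a turbine only generates between cut-in (8 m/s) and
# cut-out (24 m/s). the wind-speed distribution and rated_fraction below
# are invented for illustration, not measured data.
CUT_IN, CUT_OUT = 8.0, 24.0   # m/s

# hypothetical fraction of the year's hours spent at each mean wind speed
wind_hours = {4.0: 0.35, 6.0: 0.20, 10.0: 0.25, 15.0: 0.15, 30.0: 0.05}

def capacity_factor(profile, rated_fraction=0.6):
    # rated_fraction: assumed average output (share of nameplate) while
    # inside the operating window - also an invented number
    in_window = sum(frac for speed, frac in profile.items()
                    if CUT_IN <= speed <= CUT_OUT)
    return in_window * rated_fraction

print("hours in operating window: %.0f%%" %
      (100 * sum(f for v, f in wind_hours.items()
                 if CUT_IN <= v <= CUT_OUT)))
print("effective capacity factor: %.2f" % capacity_factor(wind_hours))
```

even with these fairly generous made-up numbers the effective figure comes out at a small fraction of nameplate capacity.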
thanks thegarbz - i didn't mention that i added in pyzor and razor, and i think clamav as well. also as my domain's been up for a while it does receive a considerable amount of spam. the load just got to be too much. i'll investigate alternatives and also bear in mind that spamassassin worked well for you.
Give a man a fish, and you feed him for a day. Teach a man to fish, and he'll invite himself over for dinner. - Calvin Keegan