Comment Re:Never touched this one (Score 1) 179

The Sinclair ZX80 and ZX81, and the TS1000, were really neat machines. The Z80 did double duty in those and directly drove the display (FAST and SLOW modes, with FAST being, well, a bit odd). And saying the keyboard was merely odd is putting it mildly.

My first home computer was a VIC-20, primarily because the TS1000's keyboard was so bad and I couldn't afford the TRS-80 Model III that I really wanted. The VIC-20 and C64 had real keyboards, and that was a very big deal; the membrane keyboard of the TS1000 was not good at all. But I was and am a Z80 assembler programmer to the core, and so while I had and used the VIC-20 a lot (I even built my own memory expansion card with 1Kx4 (2114) static RAM chips), I never really got into 6502 assembler coding.

Good high school memories.....

Comment Re:North Carolina... (Score 1) 398

White trash, eh? I know a few high-tech companies that would disagree with you.

You know, small companies that you've probably never heard of, like:
Google (the Lenoir, NC data center is featured at https://www.google.com/about/datacenters/inside/streetview/ )
Apple (their Maiden, NC data center is a model for green data centers: https://www.apple.com/environment/renewable-energy/ )
EMC (not only do they have a huge datacenter/Center of Excellence in Durham, which earned LEED Gold status ( http://www.emc.com/about/news/press/2013/20130314-01.htm ), but they also manufacture storage arrays in their Apex plant ( http://www.emc.com/about/news/press/us/2006/08082006-4543.htm ) and have a significant R&D presence in RTP)
Facebook (the Forest City data center: https://www.facebook.com/ForestCityDataCenter ). Oh, and Rutherford County is very rural.

Further, North Carolina has one of the world's premier research and education networks, NCREN ( http://ncren.net/ ), which just underwent significant expansion over the last two years.

And the list of high-tech and higher education excellence goes on and on.

North Carolinians even know about Slashdot. :-)

Having read the actual article, and not just the biased summary, I think it was a reasonable decision for the director to make. There is a place for that type of documentary, and it would certainly be a good thing to show in the right venue. And I'm sure the director had a difficult time with the decision.

But, then again, just exactly what does Slashdot commentary have to do with the scientific process anyway? (Yes, I do understand real science, and I also don't have any need to prove that to anyone).

Comment Re:I suspect it is bcos of HP's TCPA connection (Score 1) 243

Yes, it probably could.

There are a few folks who have looked at it. It will likely be difficult to do, as you'll need to bootstrap your way up from the corresponding Fedora releases, beginning, IIRC, at Fedora 9 (the last Fedora with IA64 support).

So you'd probably need to start at F9 and chronologically build F10, F11, F12, and F13; C6 is based on a mix of F12 and F13. Oh, and major things happened between F9 and F10. 'Chronological' in this case means ordered by source RPM build timestamp: you start with a minimal buildsystem, get that up to the desired release, and then build the other packages.

The tool that sits at the guts of this is called mock, and it's a bit 'fun' to get started with. Mock makes this sort of build relatively straightforward, but it won't hold your hand or figure out for you when you need to rebase the buildhost itself. Oh, and while Fedora has a stated goal of being self-hosting, RHEL does not; that's part of the reason it took the RHEL rebuild projects so long to get version 6.0 out the door.
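To make the 'chronological' part concrete, here's a minimal, hypothetical sketch of the kind of driver loop involved: sort a directory of source RPMs by their BUILDTIME tag and feed them to mock one at a time. The SRPM directory and the mock config name (fedora-9-ia64) are placeholders for illustration, and deciding when to rebase the buildroot itself is still a manual job.

```python
#!/usr/bin/env python
# Hypothetical sketch: rebuild a pile of source RPMs in "chronological"
# order, i.e. sorted by their BUILDTIME tag, using mock.
# The SRPM directory and the mock config name are placeholders.
import glob
import subprocess

def buildtime(srpm):
    # Ask rpm for the package's build timestamp (seconds since the epoch).
    out = subprocess.check_output(["rpm", "-qp", "--qf", "%{BUILDTIME}", srpm])
    return int(out.decode().strip())

srpms = sorted(glob.glob("/srv/srpms/*.src.rpm"), key=buildtime)

for srpm in srpms:
    # mock -r <config> --rebuild <srpm> builds the package in a clean chroot;
    # it will not tell you when the buildhost itself needs rebasing.
    subprocess.check_call(["mock", "-r", "fedora-9-ia64", "--rebuild", srpm])
```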

Complicating things is the fact that you're going to be on your own for any patches you need to make to the upstream source. When upstream (Red Hat, in this case) supports an arch it's not too bad, but RHEL 6 has no IA64 support in the source, and with the kernel especially that could get hard.

To give a rough benchmark, just getting from Scientific Linux/CERN 5u4 (the last free RHEL rebuild for IA64; CERN dropped it at that point, even though Red Hat still builds for IA64 on EL5) up to CentOS 5.8 took close to a month of build time. I've built 5.9, and have 5.10 on my plate. With IA64 support in the source it's pretty mechanical, but even then there are challenges. Like composing install media, which I've not done as yet (I've just used the SLC5.4 install media, then rebased the repos to my own C5 repos, and used yum to update over to C5).
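For the 'rebase the repos' step, here's a rough sketch of what I mean, assuming you drop a yum repo definition pointing at your own rebuild and then upgrade in place. The repo id, name, and baseurl below are placeholders, not real mirrors.

```python
#!/usr/bin/env python
# Hypothetical sketch of "rebase the repos to my own C5 repos, then yum update".
# The repo id, name, and baseurl are placeholders for a local rebuild tree.
import subprocess

REPO = """\
[c5-local]
name=Local CentOS 5 IA64 rebuild
baseurl=http://buildhost.example.com/centos5/os/ia64/
enabled=1
gpgcheck=0
"""

with open("/etc/yum.repos.d/c5-local.repo", "w") as f:
    f.write(REPO)

# Disable the old SLC repos (set enabled=0 in their .repo files or remove them),
# then pull the whole system up to the local rebuild.
subprocess.check_call(["yum", "-y", "upgrade"])
```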

Again, the biggest stumbling block to C6 on IA64 is going to be maintaining any IA64-specific source patches, both in the build spec files and in the sources themselves.

But feel free to get involved. :-)

Otherwise, Debian is a good alternative, but with the mainline kernel losing support for IA64, it might get tough.

I have the manpower to maintain our own in-house CentOS 5/IA64, but not to bootstrap C6 onto IA64.

Comment Re:EPIC failure (Score 1) 243

MIPS V apparently never actually hit silicon; the R10K and R12K are still MIPS IV. (I have some SGI kit with those: a couple of the purple Indigo2 IMPACT systems, and an O2.....)

If you're thinking MIPS64, well, you can find that in embedded devices, routers, etc. Look for Cavium Octeon processors.

See https://en.wikipedia.org/wiki/List_of_MIPS_microprocessor_cores for which silicon is MIPS64.....

Comment Re:I suspect it is bcos of HP's TCPA connection (Score 5, Interesting) 243

Red Hat Enterprise Linux 5 is still available and supported for IA64, at least at the moment; that will give IA64 users a supported Linux source base at least until 2017.

I have personally rebuilt CentOS 5 from source for the SGI Altix, which is an IA64 box, and am running a smallish Altix (30 CPUs, 54GB of RAM) in production for data analysis. (NASA's Columbia supercomputer was an IA64 Altix with 10,240 CPUs.....)

But RHEL 6 is indeed not available for IA64.

Comment Re:Ardour (Score 1) 223

[I know I'm a day late....]
Harrison Mixbus. A commercial Ardour derivative with fantastic sound, put out by Harrison Consoles, a manufacturer of seriously high-end hardware. See http://www.harrisonconsoles.com/mixbus/website/ for its website. It runs natively on Windows, Mac OS X, and Linux, and pretty much equally well on all three (best, IMO, on Linux, but I use it most on OS X due to some plugins I like).

Comment Re:$20 bundle (Score 1) 314

...compared to free,

Today's caveat is that I admit I don't know how much OS X 10.8 cost off the shelf.

I paid $19.99 for both Lion and Mountain Lion, straight from the App Store (need to be at 10.6.8 first). Snow Leopard was a bit more expensive; I think it might have been $29.99 for the DVD. Leopard and prior were (and are, on eBay at least) quite a bit more than that.

Comment Re:Fix HD First (Score 1) 559

You have a point, but you lost credibility when you included OTA in that list. OTA is uncompressed 18.2mbit MPEG.

You lost credibility when you called MPEG 18.2Mb/s 'uncompressed.'

True uncompressed 1080p video with 24-bit color requires just under 1.5Gb/s of bandwidth (do the math: 1920x1080 pixels x 24 bits per pixel x 30 frames per second = 1,492,992,000 bits per second).

18.2Mb/s is 82:1 compression. The discrete cosine transform is good, but not good enough to yield lossless compression of the video if you're constrained to a fixed bitrate. Especially in poor signal areas. And the typical 'HD' OTA station will only use 7Mb/s for their 'HD' output.

Uncompressed 720p video fares a bit better, only requiring 1280x720x24x30 = 663,552,000 bits per second (around 660Mb/s), so that typical 7Mb/s HD stream is roughly 95:1 compressed. It's near lossless, and with static, blocky content it can be completely lossless, but rapid motion means lost bits and a lossy stretch of the encoding at a fixed bitrate. And so you see artifacts; grass, with its texture, is a near worst-case scenario for DCT-based codecs, so artifacting on grass is most definitely worse.
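A quick back-of-the-envelope check of the numbers above, as a sketch; the 18.2Mb/s and 7Mb/s stream rates are the figures from this thread:

```python
# Raw bitrates for uncompressed video and the resulting compression
# ratios at the stream rates discussed above.
def raw_bps(width, height, bits_per_pixel=24, fps=30):
    return width * height * bits_per_pixel * fps

raw_1080p = raw_bps(1920, 1080)  # 1,492,992,000 b/s, just under 1.5 Gb/s
raw_720p = raw_bps(1280, 720)    #   663,552,000 b/s, around 660 Mb/s

print(raw_1080p / 18.2e6)  # ~82:1 for a full 18.2 Mb/s ATSC stream
print(raw_720p / 7e6)      # ~95:1 for a typical 7 Mb/s 'HD' subchannel
```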

The 2008 revision of ATSC includes H.264 as a codec; I can 'see' H.264, and, like the GP, it does bother me.

Comment Re:There really is no point (Score 1) 559

My RPoV in one of my eyes is much, much shorter than average, and I use it to good effect, particularly when splicing fiber optics. I don't need a magnifier: with an RPoV of around 6-8 inches (depending upon the time of day) I can see the squareness of a cleaved fiber unaided (although as I approach 50 it's not quite as clear as it used to be, but, hey, such is life).

That of course has its downsides, like the heavy, thick lens over that eye. To read my Droid Razr's screen I lower my glasses and hold the screen a mere ten inches from my face; it looks a bit silly, but that's what's most comfortable for me. It's almost time for bifocals, which I've put off way too long.

I'd love to have a 32-inch display with 250+ pixels per inch..... (that would be roughly an 8K display: 7680x4320, or 4320p). Everything looks smoother, and you can tell the difference when the display is close. And I'm talking within 25 inches of my face close, here; I want the screen real estate for my multitude of open windows.....
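As a rough check of that "32 inch at 250+ ppi is roughly 8K" figure, here's a small sketch assuming a standard 16:9 panel measured on the diagonal:

```python
import math

# Width of a 32-inch 16:9 panel, then the pixel density of a 7680x4320 grid.
diag_in = 32.0
width_in = diag_in * 16 / math.hypot(16, 9)  # about 27.9 inches wide
ppi = 7680 / width_in                        # about 275 pixels per inch
print(round(width_in, 1), round(ppi))
```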

See https://en.wikipedia.org/wiki/List_of_displays_by_pixel_density
