Comment Re:PS3 (Score 2) 276

(Note: this is speculation; I never asked Nintendo about this, nor did I have access to any internal documentation on the Wii's RAM, much less the WiiU's.)

Like I said, the Wii's 64 MB of GDDR RAM is very quick, but it's optimized to be read and written in big chunks. It's not meant for per-byte random access.

I don't have the specifics at hand, but that RAM is read in big chunks (32 bytes, 256 bytes, I really don't remember) and kept in cache from there. As long as you stay in that cache, it's all right.
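To illustrate (this is my own toy example, with a 64-byte chunk size pulled out of thin air, not the Wii's real burst size): sequential access amortizes each fetched chunk over every byte in it, while strided access pays for a whole chunk per byte it actually uses.

```cpp
// Sequential access touches every byte of each fetched chunk; strided access
// fetches a whole chunk but uses only one byte of it. The 64-byte stride is
// an assumed line/burst size for illustration, not the Wii's actual figure.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t N = 1 << 24;  // 16M bytes
    std::vector<unsigned char> buf(N, 1);
    unsigned long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < N; ++i) sum += buf[i];      // sequential: cache-friendly
    auto t1 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < N; i += 64) sum += buf[i];  // strided: one byte per chunk
    auto t2 = std::chrono::steady_clock::now();

    auto us = [](auto a, auto b) {
        return (long long)std::chrono::duration_cast<std::chrono::microseconds>(b - a).count();
    };
    // Compare the time per element actually read, not the total time.
    std::printf("sequential: %lld us for %zu reads\n", us(t0, t1), N);
    std::printf("strided:    %lld us for %zu reads (sum=%lu)\n", us(t1, t2), N / 64, sum);
    return 0;
}
```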

Also, that RAM is used by the GPU, DMA, and everything else external to the CPU, so its bandwidth is split between everything in the console at the memory controller. That means crappy bandwidth per client. It's GDDR RAM tied to the graphics chip, and the graphics chip has priority. That's why they added so many features for transferring data back and forth from RAM: you can do bulk transfers in the background while the CPU is happily churning on something else. If that RAM were really so fast for the CPU, such tools wouldn't exist.

On a side note, I'd take the OP AC with a grain of salt. However, HDDs do transfer quickly when accessing data sequentially, so I won't say it's impossible to match the CPU-side bandwidth. You are totally right about latency, though; HDDs are very slow to seek.

Again, a reminder: this is for the Wii. I have no idea how the WiiU works.

Comment Re:PS3 (Score 4, Interesting) 276

OP AC:
I used to code for the Wii. I haven't coded for the WiiU, so I can't say for sure; I'm only extrapolating from what you're saying here.

However, the info you're giving is mostly the same as what the Wii used to have. I expect they kept full compatibility between the WiiU and the Wii so they could emulate the older system; that probably explains the chips.

Your PS (Paired Singles) experience is mostly what I would expect from a newbie assembly programmer. Sorry. Yes, it's very hard to code with PSes, but once you get the hang of it, they're very efficient.
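For people who never touched them: paired singles are Gekko/Broadway-specific, but conceptually they're just two-wide SIMD, packing two floats and operating on both per instruction. A rough analogy in C++ using SSE intrinsics (my own illustration; the real thing is PowerPC paired-single instructions, not SSE):

```cpp
// Rough analogy only: a two-wide SIMD add, packing and unpacking floats by
// hand. The Gekko/Broadway uses paired-single PowerPC instructions, not SSE;
// the pack/unpack bookkeeping is what makes PS code hard to write by hand.
#include <xmmintrin.h>
#include <cstdio>

int main() {
    float a[4] = {1.0f, 2.0f, 0.0f, 0.0f};  // only the first two lanes matter here
    float b[4] = {3.0f, 4.0f, 0.0f, 0.0f};
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);          // both pairs added in one instruction
    float c[4];
    _mm_storeu_ps(c, vc);
    std::printf("%f %f\n", c[0], c[1]);      // 4.0 6.0
    return 0;
}
```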

As for your memory experience, I would expect the WiiU to use the equivalent of the Wii setup, meaning a very fast internal memory and a cacheless external memory. It's powerful if you understand how to work its magic, and you need to know how to use caches or other accumulators to transfer data.
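Here's a minimal sketch of the pattern I mean: double-buffering between slow external RAM and fast internal memory, processing one chunk while the next transfers. dma_start/dma_wait are hypothetical stand-ins (stubbed with memcpy here); any real SDK has its own DMA API.

```cpp
// Double-buffered streaming: copy big chunks from slow external RAM into fast
// internal scratch memory, and process one chunk while the next one transfers.
#include <cstddef>
#include <cstring>

constexpr std::size_t CHUNK = 4096;
alignas(32) static unsigned char scratch[2][CHUNK];  // stands in for fast internal RAM

// Hypothetical DMA API, stubbed with memcpy. A real SDK's async copy would
// return immediately and let the CPU keep working during the transfer.
static void dma_start(void* dst, const void* src, std::size_t n) { std::memcpy(dst, src, n); }
static void dma_wait() { /* real code would block until the transfer completes */ }

static void process(unsigned char* /*buf*/, std::size_t /*n*/) { /* the actual work */ }

void stream(const unsigned char* external, std::size_t total) {
    const std::size_t nchunks = total / CHUNK;
    if (nchunks == 0) return;
    dma_start(scratch[0], external, CHUNK);            // prime the first buffer
    for (std::size_t i = 0; i < nchunks; ++i) {
        dma_wait();                                    // chunk i is now in scratch
        if (i + 1 < nchunks)                           // kick off the next transfer
            dma_start(scratch[(i + 1) & 1], external + (i + 1) * CHUNK, CHUNK);
        process(scratch[i & 1], CHUNK);                // CPU churns while DMA runs
    }
}
```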

Not saying it isn't a pain. It is. Especially if you code as a general-purpose guy (big company), with compatibility across multiple platforms. Most multiplatform engines assume one kind of memory, so they expect fast, efficient RAM for the whole game. However, if you code solely for the WiiU and have a background in Wii or GameCube, you'll feel right at home, I'm sure. Reading your comments, it all rang bells.

LordLimecat:
It would make sense if the WiiU uses the same system as the Wii. The Wii uses two kinds of RAM: the first is very quick for random access, but you have very little of it; the second is very quick for sequential write access, but horribly slow for random read access. Depending on the test, you can measure orders of magnitude of slowdown in that second kind of RAM on the Wii. Now, I don't have experience with the WiiU (and even if I did, I would keep it confidential, to be honest), but I do feel in a familiar place. :)

-full disclosure- I work for EA; all the info here was double-checked to be publicly available in the likes of Wikipedia and Google. Opinions are mine.

Comment Imagination? (Score 0) 279

Funny how on the Mac it was always Applications, on Windows it was always Programs, on Linux it was always Software, and then you got all the different variations across platforms. Some use Ware, some use Soft (Softpedia, for example).

Then Apple starts using Apps, coins the App Store to go with it, gets the most talked-about platform, and somehow it has now become "common sense" to use Apps for everything, and the only place to get an App is the App Store.

Imagination, people ... come on! It's a freaking term; coin your own! Soft Store, Get-A-Ware, I don't know what. And although I understand Apple's stance, I find it funny and ridiculous. It reminds me of Microsoft Bookshelf.

Comment Re:I find it odd (Score 2) 244

You see, that's exactly the kind of thing people should never have to hear about a product. If I get a product, whether for $0 or $10,000, the vendor should always be responsible for its integrated tools.

Let's say I buy an integrated, specialized medical database using Oracle as the backend. First, I shouldn't really have to care that it uses Oracle. Is the product working or not? Yes or no. That a specific request fails "because it's an Oracle bug" is moot; the vendor decided to use Oracle, so it should vouch for it.

Let's say I buy M$ Outlook. It uses M$ Jet as its backend. Should I really care? Absolutely not! Actually, you used to learn about that part when your archive went over 2 GB and the system balked with a corrupted file. The vendor telling me it's a Jet bug shouldn't be taken seriously: they chose to use it, they live with its limitations, and it becomes an Outlook bug.

Same for Chrome. I decide to install Chrome on my computer. It uses WebKit. It comes bundled with multiple DLLs and tools: D3DX, Gears, AVFormat, and so on. Some are even signed by Google themselves; some files even contain Flash provisions inside them. Google should vouch for what they ship and actually consider the bundled tools part of their software, no matter what.

(Extrapolating.) I wonder how it would go with my mom: trying to make her understand that she's using software she installed, but her computer got infected with malware because of some extraneous tool that came bundled in the default package when she installed Chrome, and that tool is buggy. :) She'd remove Chrome and never go back, because it's Chrome's fault. :)

Comment I find it odd (Score 1) 244

A company takes the care to actually go through the code, assembly, source, any means really, and figure out a hack that's specific to Chrome ... and somehow, they are the ones misunderstanding the code. Somehow that answer doesn't satisfy me :)

Also, the answer is equivalent to my code using SQLite as a DLL: I bundle it in my package, I install it, it's mine ... but somehow, when someone hacks my application through a (very theoretical, example only! move on, trolls ;) ) SQLite bug, I'd have an exit door, saying "oh yes, you can hack my app, it's defenseless, but it's not my fault, it's SQLite here! *points*"

Please ... Chrome ... You bundle it, you vouch for it; you got hacked, you acknowledged it, so don't start making excuses. It's no big deal; it's only a bug, like the countless others in ALL applications throughout the world.

Comment Re:Wakeup call US? (Score 1) 174

Mmm, well, there is the PCI standard that's supposed to protect you against such things, disallowing keeping credit card numbers in any readable form. I guess Sony wasn't PCI compliant, and I guess that's why they're being checked by all these groups, because such a thing should never have happened, at least for the CC numbers. I know; I had to go through that audit last year, and it's quite thorough.

For the account info, that's something else; they screwed up, and that's it. Let me guess: their passwords were run through plain SHA-1 or some other crappy password verifier. :)
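For contrast, a minimal sketch of doing password storage less badly, using OpenSSL's PBKDF2 from C++. The salt size and iteration count here are illustrative assumptions, not a production recipe:

```cpp
// Salted, iterated password hashing via PBKDF2-HMAC-SHA256 (OpenSSL).
// A single unsalted SHA-1 pass is trivially attacked with precomputed tables;
// a per-user random salt plus many iterations is the bare minimum.
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <cstdio>

int main() {
    const char* password = "hunter2";            // illustrative only
    unsigned char salt[16];
    if (RAND_bytes(salt, sizeof salt) != 1)      // per-user random salt
        return 1;

    unsigned char key[32];
    const int iterations = 100000;               // assumed cost; tune for your hardware
    PKCS5_PBKDF2_HMAC(password, -1,              // -1: treat password as NUL-terminated
                      salt, sizeof salt,
                      iterations, EVP_sha256(),
                      sizeof key, key);

    for (unsigned char b : key) std::printf("%02x", b);
    std::printf("\n");                           // store salt + iterations + this hash
    return 0;
}
```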

Comment Re:Late Again? (Score 1) 162

It strangely reminds me of HyperCard ... and thus of the current Mac OS X Automator. If you want even easier, I'd even point to Quartz Composer. Yet again, things pioneered by Apple. I wouldn't create a full 3D game using that scripting system; it's totally different and not meant for the same thing.

People simply don't understand high-level versus low-level; both have merits.

Comment Here we go again (Score 1) 500

Is this the old geezers versus the new wet diapers yet again? (trying to be equally evil to both sides ;) )

There are new technologies, and we should embrace them. I am not a proponent of VMs, I don't like them in general, but I do see their uses, and they're very effective. It's like the STL in C++: you get very similar and nearly interchangeable std::vector, std::list, std::deque and so on (and I'm not talking about Boost or third parties here), and you need to know when to apply each or else you'll get problems. Well, in the '10s, sysadmins have the same ridiculous number of technologies available, and they need to know when to apply each. That's the new sysadmin job: not only knowing you can code a solution in bash with grep, awk, echo, while read, pipes, and rsync, but also knowing there's a package all neatly made for you, available at your fingertips with a simple apt-get (or yum).
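To make the STL analogy concrete, a toy sketch (my own example, nothing more) of matching the container to the access pattern:

```cpp
// Nearly interchangeable interfaces, very different costs: vector is cheap to
// iterate and push_back, deque is cheap to push at both ends, list is cheap
// to erase in the middle. Picking the wrong one "works" but bites you later.
#include <deque>
#include <list>
#include <vector>

int main() {
    std::vector<int> v;  // contiguous: best default, cache-friendly iteration
    std::deque<int>  d;  // fast push_front AND push_back
    std::list<int>   l;  // stable iterators, O(1) erase via an iterator

    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        d.push_front(i);  // O(1) here; on a vector this would be O(n) per insert
        l.push_back(i);
    }
    l.erase(l.begin());   // O(1); on a vector this would shift every element
    return 0;
}
```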

I keep my computer tidied up; I love to know what runs where. Even then, I do a "spring cleaning" once a year, reinstalling everything. And incredibly, the computer then runs faster and more efficiently. Why? New /etc defaults, new parameters, new software, and gone is the old clinging software that's nearly impossible to update. Same for the files. Seriously, today's computers hold hundreds of thousands of files, most of which have some arcane use we couldn't care less about but are necessary for some weird reason. I'm a sysadmin, and I don't pretend to want to know all these files.

I read the article, and yes, there are things that are changing. And seriously, I do respect the One person who can understand the Sendmail configuration files ... oh, I'd even be impressed by the M4. :) When there is a problem, I want to know why, because I love to learn. But then ... there are prerogatives, time constraints, servers that need to be up, people who need to work, and we have all these magnificent tools that let every computer be segregated into its own private little VM world (to return to the main article). So should we simply shrug, laugh, and go back to The Ancient Ways? You can keep your "vi" editor; leave me my "vim", please. :)

Comment Re:Not better than the others (Score 1) 705

It scales horribly.
It takes more work.
It takes more time in the end ... but then, I don't have a %(#)load of users sending me e-mails, calling me (off-site), or coming to see me in real life; I don't have to send a message to all the clients and users saying the outage is known; and I don't have the company managers telling me they're in a crunch or whatnot.

95% of the users who come to my desk in the few minutes a server is down:
- hi
- hi. Yes?
- how are you?
- not too bad, you?
- going well. I've got a question for you.
- yes?
- Are you working on whatever server?
- yeah, it's down. I'm restarting it now; if I find a solution, there might be some more downtime later. Hopefully not.
- oh ... ok, thanks
- no sweat.

So I'm sorry, I'm really inexperienced, I guess; I just work with active servers with users happily churning away in them, servers that are, to a point, mission critical, for those users at least. And FWIW, I'm merely a non-jerk, non-God-complex sysadmin who tries to answer users in proper English instead of shoving their little problems down their %/%/.

And yes, SVN can sometimes cause problems, especially when linked to these buzzwords: SASL, LDAP, NFS, VMs, and 1 TB worth of data.

Comment Re:Not better than the others (Score 1) 705

And I totally agree with you that it should not require a reboot; reboots should not happen.

Like in Windows ...

But in real life, they do. And sometimes, once it's determined what the cause might be, it's much easier to do a "shutdown -r 0", wait 30 seconds for the server to stop, wait 2 minutes for it to come back up, look at the log, see everything is peachy, and THEN solve whatever went wrong, rather than taking 2 hours of my life with users poking me every 30 seconds because the server is down.

An example that happened just today: an NFS server was having problems with some actions ... not all, just some very rare admin actions. We restarted the server because it was hung, with stale NFS mounts galore and applications hung because it was unresponsive.
Rebooting the whole system meant all processes at least tried to stop, and then they restarted correctly. After further investigation into what was causing the problems, we understood that one of our new servers needed the nolock option on its clients due to a software glitch. We remounted with the additional option, and everything kept running along from that point on. The next step is to do the right thing and fix the NFS software bug ... and once we have the latest software update, we'll allow locks again.

Total downtime: 5 minutes. Caveat: for 2 hours, users couldn't run one particular admin command. Now: fine ... Eventually: everything back to normal.

Comment Not better than the others (Score 2, Interesting) 705

Quotes from stupid people:
You should never reboot a Mac, it's not like Windows.
You should never reboot Unix/Linux, it's not like Windows.

Well, you shouldn't reboot Windows either. You reboot it when it goes sour. Our Windows servers seldom go sour, so we don't reboot them. Same for Mac or *nix.

The problem is when it starts to cause problems. Like our /var/spool partition deciding it has better things to do than exist ... or the ever-so-important NFS or iSCSI mount that decides to go west and gives us the "???" ls listing we all dread ... with unmounting impossible, so remounting impossible, and all these stale file handles and such. You either tweak these things for hours cleaning up all the processes, or you reboot.

In fact, being a good sysadmin, all my servers are MEANT to be rebooted if something goes sour. One SVN project goes sour? Check whether it's the repository itself that has problems, or whether the system needs to save something to shut down safely ... and if not, reboot the server. Everything magically restarts itself and does its little sanity check, then a quick look at the remote syslog makes certain everything is all right. Two minutes lost for everyone, not 3 hours of trying to clean up the mess left by some stray process somewhere, or trying to kill the 100 rogue compression and rsync jobs that started up and ate all the RAM, CPU, and network.

Since all our servers are single-purpose and are either VMs or single machines, it's a breeze to do this. iSCSI will diligently wait until the machine is back up before trying to reconnect. NFS will keep its locked files, and clients will reconnect to them. No, seriously, everything simply reconnects!

Of course, the idea is to minimize these occurrences, so we learn from them and try to fix whatever caused the problem in the first place. There's a place for that: the server crash postmortem. But there's no need to make users wait while we try to figure out wth.

Comment Worst case scenario: 2 hours (Score 1) 328

Technically, my computer will last anywhere from 2 hours of Minecraft up to 5-6 hours of normal use ... My Internet connection, with the VoIP phone, wireless router, and such, will work for a good 2 hours. If I'm recording stuff from the boob tube or if the PS3/360/Wii are up, that drops dramatically to 20 minutes (but enough to save my game and kill everything ASAP).

At that point, I'll be able to tether to at least 2 different devices, so by being smart, I could work through the whole 6 hours of battery with Teh Intarwebz.

Then ... I'll have to resort to my iPad ... which can give me quite a lot of time before it dies.

Finally, at worst, I could still technically send tweets or do light Net browsing over packet radio (I'm one of those radio freaks with a license ... ;) ) on battery-powered devices, although the radio and the computer that could pull this off are in boxes deeply buried underneath Cobblestone and Gravel. And for that, I have dozens and dozens of charged AA batteries everywhere, so it wouldn't be a problem.

But is that really final? Nope ... With some illegal electrical work, I could plug my 2010 Prius battery into the house and have quite a lot of energy to start up some emergency stuff, including recharging the UPSes. And with the gas I have in the car, I expect to get a good 6-10 more hours of emergency household energy, giving me that additional 6 hours of battery, and a whole new cycle.

AND is it final even then? Nope ... well ... yes, for now ... but I'm currently in the process of purchasing a house with 2 kW of solar panels. Not enough to run a fully working house, but enough to give emergency power to the most important things: Internet and computers! Screw the geothermal unit and hot water, screw the fridge. WE WANT INFINITE PAWAAAAAR!

Comment Re:Live+2+1 redundancy (Score 1) 680

HAHAHAH! Don't feed the troll :) But who said:
- that my camera in 2004 took 20 MB pics? It went from 2 MB JPEGs to 8 MB RAW to 15 MB RAW to 20 MB RAW over the years;
- that I don't have OTHER means of keeping older pictures;
- that there's a real reason to keep very, very old "crappy" pictures (from 1-2 years ago), and not only the good ones?

But obviously, the fact that I didn't write a three-volume novel with graphs and an accompanying DVD of examples of my process seems to be a problem :)

I once filled the full 750 GB with a week-long event, pulling in a full 64 GB load every night (before you call me bonkers on my math skills, consider that there are intermediate files and work copies, and that I still need to keep the good pictures from before anyway).
