Comment: Re:Can't get good sound on RPi. Power problems. (Score 1) 202

by dannycim (#41121261) Attached to: Serious Problems With USB and Ethernet On the Raspberry Pi

Not "sound server" in the sense that you mean.

One of the boards is to be tasked with replacing an aging automatic player in an interactive information booth; the other would replace two loop players feeding localized FM broadcasts for visitors to a tourist centre.

They'd serve sound to the public, not to network clients. But that still makes them servers.

Comment: Can't get good sound on RPi. Power problems. (Score 4, Informative) 202

by dannycim (#41116889) Attached to: Serious Problems With USB and Ethernet On the Raspberry Pi

I bought two Raspberry Pi(es) to use as audio servers and have been disappointed by the sound quality. The on-board audio output's DSP has limited bandwidth, so sound is down-sampled to 11 bits. Scratchy. That isn't advertised, so it was a let-down.

Using a USB audio dongle is a no-go either, because of the crappy USB drivers: playback stutters non-stop. Here are oscilloscope grabs of two music samples and a 1 kHz tone; the flat parts shouldn't be there. The only way to get good sound right now is to use a rather expensive USB sound board or the HDMI output, but extracting line-level audio signals from HDMI isn't a simple or cheap proposition.

The power design should be re-thought. If you power your Pi with exactly 5 volts, the voltage drop across the polyfuses causes early failures as soon as you connect peripherals with moderate current demands. If you're lucky, your power adapter supplies a bit more than 5 volts (5.25 V is nice) and you might not experience too many problems. Me, I've soldered supply wires to test points TP1 (VCC) and TP2 (GND) and bypassed the fuses completely.
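As a back-of-envelope check, Ohm's law shows how little headroom exactly 5 volts leaves. The fuse resistance and load current below are assumptions for illustration, not measurements from my boards:

```python
# Rough estimate of the voltage left after the input polyfuse.
# R_POLYFUSE and I_LOAD are assumed values, not measurements.
V_SUPPLY = 5.00     # volts at the power connector
R_POLYFUSE = 0.3    # ohms, a warm polyfuse's resistance (assumed)
I_LOAD = 1.0        # amps drawn with a couple of USB peripherals (assumed)

v_drop = I_LOAD * R_POLYFUSE   # Ohm's law: 0.3 V lost inside the fuse
v_board = V_SUPPLY - v_drop    # 4.70 V left for the board
print(f"{v_board:.2f} V")      # already under the 4.75 V USB tolerance floor
```

With a 5.25 V adapter the same drop still leaves 4.95 V, which is why the extra quarter volt helps so much.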

I hope they come up with another revision, add a low-dropout regulator (+$2), and figure out the USB niggles.

Until then, caveat emptor.

Comment: Something strange in the article. (Score 2) 68

by dannycim (#37303106) Attached to: 18-Year-Old Student Discovers Comet Break-Up

FTA: Fragmentation in comets is rarely observed, but can occur when they are closest to the sun and develop spectacular tales of gas, dust and ice particles. The tale originates from the icy core (or nucleus), so when it heats up, vapor from sublimating ices are outgassed into space, dislodging dust and other material.

Shouldn't that be "tails" and "tail", or some different definition of the word "tale" I wasn't previously aware of?

Comment: 50 years later... This is humbling. (Score 1) 175

by dannycim (#35354482) Attached to: Futureproofing Artifacts: Spacewar! 1962 In HTML5

You know, after working on my own long-term project (25 years between updates), which Spacewar! over-shadows by a factor of two, I've realized that code I write now, no matter how trivial, may be read back a long time afterward. And since I'm a very sloppy programmer, this is embarrassing on a large scale.

Oh well. If you're curious.

Comment: Re:They Why ZFS? (Score 1) 235

by dannycim (#34311710) Attached to: Running ZFS Natively On Linux Slower Than Btrfs

XFS is extremely prone to data corruption if the system goes down uncleanly for any reason. We may strive for nine nines, but stuff still happens. A power failure on a large XFS volume is almost guaranteed to lead to truncated files and general lost data. Not so on ZFS.

[Citation needed], as Wikipedia would say. XFS is no more prone to data corruption than any other journalled filesystem in the event of an unexpected halt.

You should see the fireworks I got on Solaris 10 while I was running a script that did a bunch of zpool commands just as the power went out. Borked everything.

I love ZFS, but I'm not deluded into thinking it's magic.

Comment: Luxury! (Score 1) 397

by dannycim (#34221484) Attached to: Auto Industry's Fastest Processor Is 128Mhz

Luxury. We used to have to get out of the lake at six o'clock in the morning, clean the lake, eat a handful of gravel, work twenty hour day at mill for tuppence a month, come home, and Dad would thrash us to sleep with a broken bottle, if we were lucky!

Meanwhile I'm working on a micro-controller project that runs at 500Hz (not kilo, just hertz).

If you keep the code tight and hand-craft it, 128 MHz is blindingly fast.

Comment: Here's how I'd do it. (Score 3, Insightful) 332

by dannycim (#34183628) Attached to: Sophos Researcher Suggests Password 'Free' to Spur Wi-Fi Encryption

1. Bring laptop with extra WiFi dongle into a public area.
2. Connect to Free WiFi spot using internal nic.
3. Act as an Access Point on second nic with a cooler sounding SSID.
4. NAT traffic to first WiFi net and grab everything of interest.
5. ???
6. Profit!!!1!!ONE!

Comment: Easy problem to understand, hard to fix. (Score 1) 472

by dannycim (#33998764) Attached to: The State of Linux IO Scheduling For the Desktop?

The problem is this: let's say you perform an action that reads 5 blocks from the disk. While the system is idle it has nothing else to do, so your 5 blocks are read immediately, super fast.

While the system is busy with some other I/O-intensive job, it might be doing 500 block reads at the same time. Everything goes into the same queue, so your task is only 1% of the requests that have to be completed in a given time. Result: your task takes 100 times longer.
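A toy model of a single shared FIFO queue makes the slowdown concrete. The 1 ms per block service time is an assumed round number, not a real disk figure:

```python
# Toy single-queue model: every request costs the same fixed service time,
# and your task's blocks sit in line behind all the background I/O.
BLOCK_MS = 1  # assumed service time per block read, in milliseconds

def finish_ms(background_blocks, task_blocks):
    """Time until the task's last block is read from a shared FIFO queue."""
    return (background_blocks + task_blocks) * BLOCK_MS

idle = finish_ms(0, 5)      # idle disk: 5 ms
busy = finish_ms(500, 5)    # queued behind a 500-block job: 505 ms
print(idle, busy, busy // idle)  # roughly the 100x slowdown described above
```

Real disks complicate this with seek ordering and merging, but the queueing arithmetic is the heart of it.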

This is the problem all the schedulers are trying to solve: being fair, so that every task gets a reasonable share of priority, while keeping performance at an optimum level.

For example, some O/S researchers have tried to implement multi-tiered systems where every I/O request is tagged with flags indicating whether the call came from an interactive user action or was generated by non-interactive jobs (daemons, lower-level layers, etc.), and then give higher priority to the user requests. Two problems with that approach: it can be very hard to differentiate the two, and a heavy user task may starve the very system tasks the user tasks depend on to proceed; vicious circles and race conditions.
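The two-tier idea can be sketched in a few lines (hypothetical names and tiers, not any real kernel's API): tag each request and always drain interactive ones first, FIFO within a tier.

```python
import heapq

# Hypothetical two-tier request queue: interactive requests (tier 0)
# always drain before batch requests (tier 1); arrival order breaks ties.
INTERACTIVE, BATCH = 0, 1

class TieredQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # monotonic counter: FIFO ordering within a tier

    def submit(self, tier, request):
        heapq.heappush(self._heap, (tier, self._seq, request))
        self._seq += 1

    def next_request(self):
        return heapq.heappop(self._heap)[2]

q = TieredQueue()
q.submit(BATCH, "daemon read #1")
q.submit(INTERACTIVE, "browser read")
q.submit(BATCH, "daemon read #2")
print(q.next_request())  # prints "browser read": it jumps the queue
```

It also shows the starvation hazard from the paragraph above: as long as interactive requests keep arriving, the batch tier never runs.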

I'm glad I'm not trying to code a kernel scheduler; these are very hard problems, and figuring out one that's fair for all types of use is nigh impossible.

The great thing about the open-source O/Ss is that everything's done in the open: there are intense discussions going on in the field, and multiple solutions being worked on and tested.

To me, Linux has always felt like it gave much higher priority to I/O than to the "user experience". It's something I've come to expect. If I copy gigabytes from one disk set to another, I gladly accept that my web browser is going to be sluggish for a while, content that at least the copy is done so efficiently that it takes the shortest time possible.

Other O/Ss that I won't name may "feel" better, but have nowhere near the I/O throughput that Linux has.
