
Comment: Re:It was pretty cool in its day (Score 2) 192

by ardor (#47501107) Attached to: The Almost Forgotten Story of the Amiga 2000

For gaming, why not just run the PC-versions of Dune, Monkey Island and Settlers? They aren't exactly the same but is the difference really that important?

One reason is that back then, many DOS versions of games only had AdLib or even just PC speaker support, while the Amiga version came with fully digitized audio.
Compare https://www.youtube.com/watch?... to https://www.youtube.com/watch?... , https://www.youtube.com/watch?... . (The sound is actually in stereo, but the YouTube videos are downmixed for some reason.) The difference can be jarring, mostly thanks to the inferior music.

As a matter of fact, the Amiga was famous for its sound (and graphics) capabilities back then, *especially* compared to DOS. (Well, until 256-color VGA became common.)

Some DOS games had digitized audio as well, but the Amiga could mix 4 channels in hardware. In DOS, mixing had to be done in software, unless you had something like a GUS or an AWE32 sound card.
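To make the hardware-vs-software mixing point concrete, here is a minimal sketch (in Python, purely illustrative; the Amiga's Paula chip did the equivalent in hardware) of what software mixing boils down to: summing the channels sample by sample and clamping the result:

```python
# Minimal sketch of 4-channel software mixing, the work the Amiga did
# in hardware. Sample values here are made up for illustration.

def mix_channels(channels):
    """Sum samples across channels and clamp to the signed 8-bit range."""
    mixed = []
    for samples in zip(*channels):
        s = sum(samples)
        mixed.append(max(-128, min(127, s)))  # clamp to avoid wrap-around
    return mixed

# Four channels, two samples each; loud passages clip at the 8-bit limit.
print(mix_channels([[10, 20], [30, 40], [100, 100], [50, -200]]))
```

Doing this per sample on every output tick is exactly the CPU cost a DOS game had to pay that an Amiga game got for free.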

Comment: Re:Max RAM? (Score 1) 353

I actually cannot open more than 4-5 tabs on my old Thinkpad with 2 GB RAM, because the RAM fills up quickly. This is with just Chromium and IceWM running, on a bare-minimum Arch Linux installation. With more than 4-5 tabs, disk activity skyrockets, because the disk cache is pretty much zero at that point. And this hurts the system twice: first, any other I/O activity slows down because there is almost no disk cache, and second, Chromium itself suffers, because it too does a lot of I/O (more reads than writes).

20+ tabs on that machine is doable, sure, if you are willing to tolerate a slideshow.

Comment: Re:Max RAM? (Score 1) 353

by ardor (#46655761) Attached to: An SSD for Your Current Computer May Save the Cost of a New One (Video)

This is quickly becoming untrue. The number-one memory hog these days is the browser. People keep 20+ tabs open, many of them filled with tons of fancy graphics and complex page structures. Virtual memory WILL become active, and then everything is incredibly slow.

Also, even if you measure that your browser, OS etc. consume say 3 out of 4 GB, do not forget about disk cache. It is *crucial*. Plenty of free RAM means that a lot of files can be cached in memory, which helps immensely.

And, since RAM has become rather cheap these days, I'd try to max out my mainboard's RAM capacity for one simple reason: DDR generations come and go. Today you might think "oh, that is lots of RAM". Tomorrow you will be glad you got that much back then, because that DDR generation is obsolete by now, and the remaining chips are expensive. As an example, just try to find DDR2 RAM to upgrade an old PC...

Comment: Re:"What the internet was designed for" (Score 1) 361

by ardor (#46281587) Attached to: Killing Net Neutrality Could Be Good For You

"Soft realtime" means it operates in real-time, able to land exact guarantees within given time constraints reliably; "Hard realtime" means it's able to make 100% perfect guarantees within given time constraints, and absolutely will not miss those guarantees. "Soft realtime" requirement means your shit only operates if realtime works, but it's okay if realtime fails--it can recover, or if it flat out fails out it can be restarted with no harm. "Hard realtime" requirements mean a failure is CATASTROPHIC and has SEVERE CONSEQUENCES.

Okay, I admit I forgot about one thing: the definitions of hard and soft realtime can be context specific. In the multimedia world, soft realtime would refer to something like realtime 3D rendering in games and visualizations, where a frame does not strictly have to finish rendering before a certain deadline; instead, there is a tolerance range for the interval of each rendering (and for how fast and how much this interval changes over time). Audio is hard realtime, because even one missed sample is an error (in the case of capturing, arguably fatal, because it can mean the entire recording has to be thrown away). In your definition, all of these would be lumped together in the "soft realtime" category, which is not really useful.
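The distinction can be sketched in a few lines (hypothetical names, not any real API): a hard-realtime task fails on a single missed deadline, while a soft-realtime task tolerates misses within some budget:

```python
# Illustrative sketch of the hard/soft realtime distinction.
# The function name and tolerance value are made up for this example.

def misses_acceptable(miss_count, total_count, hard, tolerance=0.01):
    """Return True if the observed deadline misses are acceptable."""
    if hard:
        # Hard realtime: one missed audio sample is already an error.
        return miss_count == 0
    # Soft realtime: a small miss rate is within the tolerance range.
    return miss_count / total_count <= tolerance

print(misses_acceptable(1, 1000, hard=True))   # audio capture
print(misses_acceptable(5, 1000, hard=False))  # game rendering
```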

Comment: Re:"What the internet was designed for" (Score 1) 361

by ardor (#46277221) Attached to: Killing Net Neutrality Could Be Good For You

Sure, it is nice to see that these issues have been acknowledged and are being fixed. But that proves the point, doesn't it? If the internet were easily and perfectly capable of real-time HD video delivery, these fixes wouldn't be necessary, would they?

In practice, I think it will unfortunately take years until real-time constraints are supported by consumer-grade network stacks.

Comment: Re:"What the internet was designed for" (Score 1) 361

by ardor (#46277131) Attached to: Killing Net Neutrality Could Be Good For You

Debatable. On one hand, a dropped/lost frame does not bring down the system, and is not apparent until it happens often. On the other hand, it *is* an error. I guess it is more hard-realtime-ish when it comes to capturing video, since then, the video data is actually incomplete.

Audio is definitely hard realtime for me. Even a few lost samples are immediately noticed and usually unacceptable. Capturing is very tough because of the additional low-latency constraints.
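A quick back-of-the-envelope calculation shows why: even a brief processing stall loses a few hundred samples at common audio rates, which is enough to produce an audible click or gap (the numbers below are just examples):

```python
# How many samples a short processing stall loses at a given sample rate.

def lost_samples(stall_ms, sample_rate_hz):
    """Samples that should have been produced during a stall of stall_ms."""
    return int(sample_rate_hz * stall_ms / 1000)

print(lost_samples(5, 48000))   # a 5 ms stall at 48 kHz
print(lost_samples(1, 44100))   # even 1 ms at CD rate loses dozens
```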

Comment: Re:ROTFL you said it best - it's allowed, not happ (Score 1) 361

by ardor (#46275029) Attached to: Killing Net Neutrality Could Be Good For You

They kind of did back then. Remember, in the past, you usually logged into something like CompuServe, which acted as an Internet portal, but also contained its own network.

But I think the simple reason net neutrality wasn't such a hot topic was that the internet just wasn't big enough yet. It is hard to compare the internet of the 80s with the one of today. In the 80s, it was still very research- and university/DARPA-centric. Today... not so much.

Comment: Re:"What the internet was designed for" (Score 4, Informative) 361

by ardor (#46274995) Attached to: Killing Net Neutrality Could Be Good For You

Streaming video is easier than downloading large programs, as you only need to ship a certain amount per second, rather than ship it all and only be able to use it when the last byte has arrived. For real-time broadcast, which causes massive numbers of synchronized transfers, you can use multicast directly, as well as to "prime" a content delivery network node close to your particular edge.

Uh, no, it is not easier. I say that as somebody who has been developing audio and video delivery software. The requirements differ significantly. Most network gear out there is optimized to maximize throughput, which is *not* what you want for video. For video, you want on-time delivery. This affects the kind of buffering used in hardware and software. Given the many different sources of latency over a WAN, the real-time constraints of video playback cannot be met unless you use a big jitter buffer. How big? Well, here is where the difficulties start.
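A jitter buffer, in its simplest form, just holds incoming frames back for a fixed playout delay, so that variable network arrival times do not cause underruns; sizing that delay is the hard part. A minimal sketch (illustrative class and names, no real streaming API):

```python
# Minimal fixed-delay jitter buffer sketch: frames become available for
# playout only after they have aged past the configured playout delay,
# trading latency for tolerance of network jitter.
import collections

class JitterBuffer:
    def __init__(self, delay):
        self.delay = delay                  # playout delay in time units
        self.queue = collections.deque()    # (arrival_time, frame) pairs

    def push(self, arrival_time, frame):
        self.queue.append((arrival_time, frame))

    def pop(self, now):
        """Return the next frame once it has aged past the playout delay."""
        if self.queue and now - self.queue[0][0] >= self.delay:
            return self.queue.popleft()[1]
        return None                         # not ready yet: keep buffering

buf = JitterBuffer(delay=3)
buf.push(0, "frame0")
buf.push(1, "frame1")
print(buf.pop(2))  # too early, still buffering
print(buf.pop(3))  # frame0 has aged 3 units, ready for playout
```

A larger delay absorbs more jitter but adds end-to-end latency, which is exactly the trade-off that makes "how big?" a hard question.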

Multicast does not solve that problem. All it solves is scalability (which is nice), but not the real-time constraint. BTW, if you wish to distribute video over Wi-Fi, you might be surprised to find that many unicast streams work better than a single multicast stream, thanks to Wi-Fi-specific issues.

(And yes, video playback is a case for real-time programming. Real-time simply means that a given task has to be finished before a specific deadline passes; in this case, the next frame has to be shown on screen before its timeslice has passed. It does not necessarily mean that it must be something that happens many times per second.)

I do think this case against net neutrality is bollocks though.

Comment: Re:good for headless usage? (Score 1) 197

by ardor (#44759441) Attached to: Tiny $45 Cubic Mini-PC Supports Android and Linux

One way is to transmit the OpenGL commands as some sort of command stream to the client, which contains the GPU and renders the command stream locally.
The other way is to render with the GPU on the headless machine. Then you need to encode the rendered frames and transmit them to the client on the fly. You could also use VNC, RDP, etc.
