Comment Re:"What the internet was designed for" (Score 1) 361

"Soft realtime" means it operates in real-time, able to land exact guarantees within given time constraints reliably; "Hard realtime" means it's able to make 100% perfect guarantees within given time constraints, and absolutely will not miss those guarantees. "Soft realtime" requirement means your shit only operates if realtime works, but it's okay if realtime fails--it can recover, or if it flat out fails out it can be restarted with no harm. "Hard realtime" requirements mean a failure is CATASTROPHIC and has SEVERE CONSEQUENCES.

Okay, I admit I forgot about one thing: the definition of hard and soft realtime can be context-specific. In the multimedia world, soft realtime would refer to something like realtime 3D rendering in games and visualizations, where a frame does not strictly have to finish rendering by a fixed deadline; instead, there is a tolerance range for the interval between renders (and for how fast and how much this interval changes over time). Audio is hard realtime, because even one missed sample is an error (in the case of capturing, arguably fatal, because it can mean the entire recording has to be thrown away). In your definition, both would be lumped together in the "soft realtime" category, which is not really useful.

Comment Re:"What the internet was designed for" (Score 1) 361

Sure, it is nice to see that these issues have been acknowledged and are being fixed. But that proves the point, doesn't it? If the internet were easily and perfectly capable of real-time HD video delivery, these fixes wouldn't be necessary, would they?

In practice, I think it will unfortunately take years until real-time constraints are supported by consumer-grade network stacks.

Comment Re:"What the internet was designed for" (Score 1) 361

Debatable. On one hand, a dropped/lost frame does not bring down the system and is not apparent unless it happens often. On the other hand, it *is* an error. I guess it is more hard-realtime-ish when it comes to capturing video, since then the captured video data is actually incomplete.

Audio is definitely hard-realtime for me. Even a few lost samples are immediately noticed and usually are unacceptable. Capturing is very tough because of additional low-latency constraints.

Comment Re:ROTFL you said it best - it's allowed, not happ (Score 1) 361

They kind of did back then. Remember, in the past, you usually logged into something like CompuServe, which acted as an Internet portal, but also contained its own network.

But I think the simple reason net neutrality wasn't such a hot topic back then is that the internet just wasn't big enough yet. It is hard to compare the internet of the 80s with the one of today. In the 80s it was still very research- and university/DARPA-centric. Today... not so much.

Comment Re:"What the internet was designed for" (Score 4, Informative) 361

Streaming video is easier than downloading large programs, as you only need to ship a certain amount per second, rather than ship it all and only be able to use it when the last byte has arrived. For real-time broadcast, which causes massive numbers of synchronized transfers, you can use multicast directly, as well as to "prime" a content delivery network node close to your particular edge.

Uh, no, it is not easier. I say that as somebody who has been developing audio and video delivery software. The requirements differ significantly. Most network gear out there is optimized to maximize throughput, which is *not* what you want for video. For video, you want on-time delivery. This affects the kind of buffering used in hardware and software. Given the many different sources of latency over a WAN, the real-time constraints of video playback cannot be met unless you use a big jitter buffer. How big? Well, here is where the difficulties start.
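To illustrate the idea (this is a made-up sketch, not code from any real player): a jitter buffer holds each packet back until its timestamp plus a fixed delay has passed, so variation in network delay is absorbed before playback. queue_peek() and queue_pop() are hypothetical helpers for a timestamp-ordered queue.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct {
        uint64_t pts_us; /* presentation timestamp in microseconds */
        /* payload omitted for brevity */
    } packet;

    /* hypothetical helpers for a timestamp-ordered packet queue */
    packet *queue_peek(void);
    packet *queue_pop(void);

    #define JITTER_DELAY_US 200000 /* 200 ms; picking this number well is the hard part */

    /* Return the packet to play now, or NULL if the head packet is not due yet. */
    packet *jitter_buffer_pull(uint64_t now_us)
    {
        packet *head = queue_peek();
        if (head == NULL)
            return NULL; /* underrun: nothing buffered */
        if (now_us < head->pts_us + JITTER_DELAY_US)
            return NULL; /* not due yet; still absorbing jitter */
        return queue_pop();
    }

The tradeoff is right there in the constant: a bigger delay absorbs more jitter but adds latency, which is why sizing it is the hard part.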

Multicast does not solve that problem. All it solves is scalability (which is nice), but not the real-time constraint. BTW, if you wish to distribute video over Wi-Fi, you might be surprised to find out that many unicast streams are better than one multicast stream, thanks to Wi-Fi-specific issues.

(And yes, video playback is a case for real-time programming. Real-time simply means that a given task has to be finished before a specific deadline passes; in this case, the next frame has to be shown on screen before its timeslice has passed. It does not necessarily mean that it must be something that happens many times per second.)
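As a sketch of what "deadline per timeslice" looks like in code (render_frame() here is a hypothetical stand-in for the actual work):

    #include <time.h>

    void render_frame(void); /* hypothetical: decode/draw the next frame */

    /* Present one frame per 40 ms timeslice (25 fps), sleeping until each
       absolute deadline on a monotonic clock. */
    void present_loop(void)
    {
        struct timespec deadline;
        clock_gettime(CLOCK_MONOTONIC, &deadline);
        for (;;) {
            render_frame(); /* must finish before the deadline below */
            deadline.tv_nsec += 40 * 1000 * 1000; /* next 40 ms slice */
            if (deadline.tv_nsec >= 1000000000L) {
                deadline.tv_nsec -= 1000000000L;
                deadline.tv_sec += 1;
            }
            /* If render_frame() overran the deadline, this returns
               immediately and the frame is late: a missed deadline. */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
        }
    }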

I do think this case against net neutrality is bollocks though.

Comment Re:good for headless usage? (Score 1) 197

One way is to transmit the OpenGL commands as some sort of command stream to the client, which then renders that stream locally on its own GPU.
The other way is to render with the GPU on the headless machine. Then you need to encode the rendered frames and transmit them to the client on the fly. You could also use VNC, RDP, etc.
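Roughly, the second approach boils down to a loop like this (every function and type here is a hypothetical placeholder, just to show the shape of it):

    /* Render on the headless box's GPU, read the frame back, encode it,
       and ship it to the client. All names below are made up. */
    typedef struct frame frame;
    typedef struct buffer buffer;

    void    render_scene_offscreen(void); /* e.g. into an FBO */
    frame  *read_back_pixels(void);       /* e.g. glReadPixels, or zero-copy */
    buffer *encode_frame(frame *f);       /* e.g. hand off to a hardware encoder */
    void    send_to_client(buffer *b);    /* e.g. over RTP or a plain TCP stream */

    void serve_frames(void)
    {
        for (;;) {
            render_scene_offscreen();
            frame *f = read_back_pixels();
            buffer *b = encode_frame(f);
            send_to_client(b);
        }
    }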

Comment Re:Which GPU? Which GPU drivers? (Score 2) 197

The i.MX6 inside uses a Vivante GPU. Vivante drivers work rather well, but for some reason that company can't seem to version its drivers properly, which is annoying. Freescale takes care of this, however. When working on Sabre SD boards, I always had stable OpenGL ES and OpenVG support. The newest Vivante drivers even support desktop OpenGL (only 2.1, though).

There is also an open-source driver project called etnaviv ( https://github.com/laanwj/etna_viv ); it has come pretty far. People have already been running GLQuake and other titles with it.

Comment Re:MythTV / Multimedia Frontend (Score 1) 197

I have been playing around with the Freescale VPU. It is very powerful, can do 1080p easily, and Linux support is solid. It can also encode in hardware. Supported formats I know of are: h.264, MPEG-1/2/4, VP8, VC-1, WMV3, MJPEG. I think h.263 too, not sure though. It also has deinterlacing, hardware scaling, and color space conversion capabilities (think YUV->RGB).

No VDPAU support, but VDPAU is nVidia-only anyway; you probably meant VA-API. I do not know if that is supported. There are GStreamer plugins for the VPU; XBMC and libav/ffmpeg support is being worked on, IIRC.
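With the GStreamer plugins installed, playback can be as simple as letting playbin autoplug the decoder: if the VPU decoder element is installed and ranked high enough, it gets picked automatically. A minimal sketch (the file URI is just a placeholder):

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        /* playbin autoplugs the highest-ranked decoder, which would be the
           VPU plugin's decoder if it is installed. The URI is a placeholder. */
        GstElement *play = gst_element_factory_make("playbin", "play");
        g_object_set(play, "uri", "file:///path/to/video.mp4", NULL);
        gst_element_set_state(play, GST_STATE_PLAYING);

        /* a real program would watch the bus for errors and EOS */
        g_main_loop_run(g_main_loop_new(NULL, FALSE));
        return 0;
    }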

Comment Re:Check those numbers (Score 1) 197

Um, an RPi is much less powerful than even the $45 model.
The Chromecast uses a Marvell SoC. Marvell is notoriously uncooperative when it comes to documentation and details about their hardware, unless you are Google. (So is Broadcom, btw.)
Freescale is much more open and forthcoming.

This one combines eSATA with Gbit Ethernet (limited to 470 Mbit though, yes) and a pretty powerful video engine. It seems very nice as a DVR/HTPC combo, and/or as a box for transcoding media.

Comment Re:How much RAM? (Score 4, Informative) 197

Same problem as this model. The Gigabit is limited to 480Mbps (USB 2.0 bus speed). Actually this CuBox isn't all that different from an RPi; they run the same family of chips, the same type of RAM, the same type of I/O.

Not true. Ethernet does not go through USB here; it is connected to the SoC directly. See http://boundarydevices.com/i-mx6-ethernet/ . The Raspberry Pi uses a BCM2835 from Broadcom, while the Cubox-i uses a Freescale i.MX6, so they are not the same chip family; they aren't even made by the same company. The Raspberry Pi also does not have eSATA, while the CuBox-i does.

Comment An analogy (Score 5, Insightful) 124

Reminds me of software bugs which are "fixed" by disabling subsystems around them. Example: in a media player, AAC playback sometimes freezes and causes glitches. Solution: disable AAC playback, ensuring that the media player does not reach this undefined and broken state.
