
Comment Re:use Ethernet - decoding wrong place (Score 1) 357

If you say so; a chip that I've never seen used in ANY consumer device anywhere (and that has been available for at least 2 years) isn't going to "be embedded in gizmos anyways." I'm not questioning that real-time encoding hardware exists, just the cost. What's the cost of this device, and what's the cost of placing it in every relevant box you own? What's the cost of replacing every TV with a model capable of being properly upgraded to deal with this system, including new codecs?

Now what's the cost of doing that instead of handling what you're describing with a single, inexpensive set top box that can be more easily replaced/upgraded/installed, and that outputs to a television as a monitor? I don't think we're arguing for different things here, necessarily; I want ubiquitous network access to a television as much as you do, but I disagree that there's a price savings to be had in embedding compression hardware (not to mention the lock-in and backwards-compatibility nightmares it brings) in every device.

Network tuners already exist. Network surveillance cameras already exist. They can both already write to NAS drives without any trouble, and both are perfectly capable of being controlled over a network. Set top boxes can already switch inputs in software, poll for metadata on the movies being played, access information on other STBs, and communicate with other networked devices.

A $200 Popcorn Hour set top box is already capable of doing nine-tenths of what you listed (the multi-display game system excluded, although advances in STB hardware would certainly allow the game system to act as a server and render those images on the STB). The same can be done in any recent-model digital cable or satellite box. An STB communicates with network devices over the network and with displays over HDMI; it avoids the need to encode video when it isn't necessary, and when technology has progressed far enough that things need to be changed, it can be replaced much more easily. There's no need to fret about firmware-upgradeable televisions to support a new compression format, or about developing new compression formats to deal with the problems inherent in your system, because you treat the display as a display and leverage economies of scale on a box that's more easily replaceable. Arguing that packaging the STB in the TV is cheaper is missing the forest for the trees. Again, it works great for prerecorded content, but it falls apart for real-time video, where a cheap external box soundly beats embedded hardware on upgradeability, compatibility, and cost.

Comment Re:use Ethernet - decoding wrong place (Score 1) 357

The HDMI switch has an "original price" of $250 in the same sense that a car has a sticker price. Monoprice is not a clearance site; if you want to pay $150 to someone else, you're free to do so.

HDMI does indeed support addressing. You may want to read up on the standard before you begin claiming things that aren't true. Any HDMI device can communicate with any other over the HDMI system, although no one has built a switch that handles things in such a manner because there's no demand for such a product (as I'll explain in a bit).

I didn't go looking for an industrial part because there's no pricing available. There never is for products where the purchaser is expected to buy in lots of 1,000. The Conexant chips you linked A) have no price listed and B) don't support HD resolution, so they're a moot point anyway.

What I did link was a product with the necessary industrial hardware buried inside. If we ignore the A/D converter (those are dirt cheap anyway), the USB chipset, and the fan (yes, fan) necessary to keep the compression hardware cool, then we can make a reasonable approximation of the cost of the HD compression hardware inside. If you can find an HD-capable industrial part, including the price, please feel free to link it.

By the way, compression chips don't care whether they're taking digital input from a source that was originally digital or digital input from an A/D converter. It's already digital by the time it hits the compression hardware. There might be a tiny, minuscule hit in speed due to slight imperfections in the digitized video coming off the A/D converter, but it's pretty negligible.

Here we have a $200 box. Let's say that $100 of it is due to the unnecessary A/D equipment, USB hardware, and the enclosure itself. Hell, I'll be super generous and say that only $50 of that device is the actual compression chip. Now you're telling me that we should have a $50 encoder chip in a $200 Xbox 360 or a $400 PS3? Let's say that the cost of those chips drops tenfold thanks to economies of scale. $5 is still a ton of lost profit on devices that are known for being sold at a loss. And that's all for an encoder chip capable of doing 1080p at 30 fps. What about 60 fps? What about 120 fps or 240 fps for the newer 120Hz/240Hz panels? What about the problems introduced when you run MPEG-2 compression on the text of the website I'm attempting to read with my networked video device?
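
For a sense of scale, here's a quick back-of-the-envelope sketch (the 24-bit pixel format and frame rates are just illustrative assumptions) of how much raw video a real-time encoder has to chew through as refresh rates climb:

    # Back-of-the-envelope: raw pixel throughput a real-time 1080p encoder
    # must keep up with at various refresh rates. Purely illustrative.
    WIDTH, HEIGHT = 1920, 1080
    BYTES_PER_PIXEL = 3  # assume 8-bit RGB before any chroma subsampling

    for fps in (30, 60, 120, 240):
        pixels_per_sec = WIDTH * HEIGHT * fps
        raw_mb = pixels_per_sec * BYTES_PER_PIXEL / 1e6
        print(f"{fps:3d} fps: {pixels_per_sec / 1e6:6.1f} Mpixels/s, "
              f"~{raw_mb:6.1f} MB/s of raw video to compress")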

On top of that, you still haven't addressed the problem of poor video compression, the reasons why we're even bothering to introduce lossy compression to an uncompressed video output signal in the first place, or the issue of codec stagnation when we're dealing with everything being transcoded to MPEG-2 in the end.

Finally, I still can't work out the problem you're trying to solve in the first place. You seem so intent on comparing a video output signal to a network cable that you can't tell me WHY you want to do this. You have a vague requirement of routing inputs to outputs, but we already have a system in place for that. Commercially available set top boxes such as the Apple TV and Popcorn Hour have been around for several years, and homebrew solutions have been around for longer than that. They all use existing home networks to send only the compressed video that's necessary, then layer their interface on top before sending the result out over regular video outputs. Got a video on device A but want to play it on TV B? We can just send the video file over the existing network. You want us jumping through hoops to decode the file at device A (remember, it might not be in MPEG-2 or whatever format your system uses), layer our interface on top, re-encode it to an MPEG-2 Transport Stream, then send it to TV B. Or we can have an inexpensive set top box at TV B that's capable of handling any input format and needs no hardware to encode back to a Transport Stream; then we just dump the framebuffer out to the TV.
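
To make the "send only the compressed video that's necessary" path concrete, here's a minimal sketch; the set top box address, port, and filename are made up for illustration, and real software would pace against the transport stream's own clock rather than a fixed bitrate:

    # Sketch: ship an already-compressed MPEG-2 transport stream over the
    # LAN to a set top box that does the decoding. No re-encoding anywhere.
    import socket
    import time

    STB_ADDR = ("192.168.1.50", 1234)  # hypothetical box sitting at TV B
    CHUNK = 1316                       # 7 x 188-byte TS packets per datagram
    BITRATE = 19_000_000               # rough broadcast-style bitrate, bits/s

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open("recording.ts", "rb") as f:
        while chunk := f.read(CHUNK):
            sock.sendto(chunk, STB_ADDR)
            time.sleep(len(chunk) * 8 / BITRATE)  # crude pacing at stream rate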

I won't lie: your system sounds incredibly appealing except for the edge cases. That's why it already exists. DTVs have been capable of doing EXACTLY what you're describing for at least 6 or 7 years over FireWire. A standard exists to allow MPEG-2 Transport Streams over FireWire from any device to any device, and it even supports simplistic menu systems. Hell, there's even a standard that allows a DTV to dump a tuned signal out to a FireWire hard drive (provided the drive enclosure can speak the necessary protocol) for use as a tremendously inexpensive DVR. There's even a company that sells software that lets your computer emulate the AVCHD system, and I'm sure someone could implement an OSS solution in a couple of weeks if they were bored and resourceful. Because there's plenty of bandwidth left on the FireWire stream, it could even support DRM for the Hollywood crew. Running the FireWire protocol over Ethernet cabling is trivial to implement, and involves not much more than dropping in a box with a FireWire port on one side and an Ethernet port on the other.

It never caught on because it fails at the edge cases. Game systems require encoding hardware; computers require encoding hardware. Video transmitted in a format other than MPEG-2 TS (like, say, everything except OTA DTV) requires decoding and encoding hardware. Face it: you want your television to be a thin client rather than a monitor, except that instead of one central server you want every device to be capable of acting as a server. The minor upside (it's awesome and cheap for prerecorded video in MPEG-2 TS format) is completely outweighed by the myriad downsides (it requires extra, expensive hardware for any device that isn't playing prerecorded video in MPEG-2 TS format, and it delivers poorer image quality for real-time video).

Comment Re:So? (Score 3, Informative) 182

XP-x64 is really Windows Server 2003 with the XP appearance tacked on top. It's a fine OS, but it's also an orphaned child that's often left aside. It was cooked up as a stopgap until Vista64, and it served that purpose.

Drivers are non-existent for some pieces of hardware. Pretty much any hardware needs to have XP and Vista drivers, but XP-x64 isn't actually XP (it requires 64-bit drivers), so the drivers aren't necessarily a drop-in replacement. With the release of Vista-64 as Microsoft's 64-bit desktop OS, XP-x64 is also a complete dead-end in the driver department; new hardware comes out, and since Vista64 and Windows Server 2008 already exist, there's not as much reason for companies to bother with driver support for XP-x64. It's not worth the testing or support resources necessary for an OS that only ever commanded a tiny fraction of the market. On top of that, plenty of install applications fail because they check for XP or Vista but not XP-x64; even though the program will run, it can't be installed without some irritating workarounds.

On top of that, his IT department may be unwilling to dedicate the resources necessary to maintaining one or a few workstations with a totally different OS and image than the rest of the systems. You may argue that it's IT's job to do that, but they also need to weigh costs and benefits; perhaps they've already determined that the hardware or critical software isn't supported under XP-x64, or perhaps they're about to migrate to Win7 and it simply isn't worth the extra cost and hassle until they start migrating people in 9 months.

Comment Re:use Ethernet - decoding wrong place (Score 1) 357

Er, I bought three 10-foot HDMI cables, terminated, with ferrite beads, for $10. While I can order three 10-foot Ethernet cables, terminated, for a bit less (probably $6 or so), that's about it. Denon sells a $500 Ethernet cable, but that doesn't mean that Ethernet cabling on the whole is that expensive; don't use Monster Cable as your pricing comparison.

Here is an HDMI switch with 8 ports for $78 (provided you buy in bulk, else it's $87). A 4-port switch from the same site is $30. HDMI cable runs are possible out to 50 feet before they start needing signal repeaters, and the people who require more than that are honestly so far out at the edge that it's not worth building a standard around them.

Gaming consoles already have hardware in there to do all sorts of graphics operations in hardware. MPEG encoding is right up their alley.

I'm not sure if you're familiar with the difference between encoding and decoding 1080p MPEG-2 video. There's an order of magnitude more computational power required to encode. Gaming consoles do not have that hardware, period. Just because you claim it's right up their alley does not make it so. To put it in perspective, this is an encoder capable of real-time compression of HD signals, and a lousy one at that. Encoders capable of doing proper, broadcast-quality encoding in real time cost thousands of dollars. Anything less than that is going to look like a crappy web video: grainy, pixelated, and washed out, for no other purpose than to solve your imaginary problem. Instead of dumping the framebuffer out to a monitor, you want to dump it to an expensive piece of hardware, encode it, transmit it a few feet, and decode it. Why? Seriously?
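
To put rough numbers on that asymmetry, here's a purely illustrative sketch; real encoders use far smarter searches than brute force, but the point is that a decoder never searches at all, it just applies the motion vectors it's handed:

    # Illustrative only: the cost of naive full-search motion estimation,
    # the part of MPEG encoding that a decoder never has to do.
    MACROBLOCKS = (1920 // 16) * (1088 // 16)  # 120 x 68 blocks (1080 pads to 1088)
    SEARCH_RANGE = 16                          # +/-16 pixel search window
    CANDIDATES = (2 * SEARCH_RANGE + 1) ** 2   # positions tried per block
    OPS_PER_CANDIDATE = 16 * 16                # one absolute-difference sum per block
    FPS = 30

    ops_per_frame = MACROBLOCKS * CANDIDATES * OPS_PER_CANDIDATE
    print(f"{ops_per_frame / 1e9:.1f} billion ops per frame, "
          f"{ops_per_frame * FPS / 1e9:.0f} billion ops/s at {FPS} fps, "
          f"just for brute-force motion search")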

On top of all of that, we'll have video quality issues to contend with, as brand X will use cheaper chips that result in more macroblocking or washed-out colors on playback. It'll be like the era when we had to care about which RAMDAC various video card manufacturers were using, because certain ones resulted in visibly worse picture quality. We already deal with this to a lesser degree with decoding chips, but now you want to up the ante.

And what happens when we want to use a better compression format (such as MPEG-4) to improve picture quality at the same bitrate? Too bad! We're going to transcode to MPEG-2 anyway, so don't waste your time.

Let's face it: a television is a monitor. HDMI is DVI video plus audio. You're conjuring up a ridiculous solution to a problem no one has; it costs more and produces lower quality video as a bonus.

Comment Re:Early adopters (Score 1) 262

Just a heads-up: The PS3 is far from the cheapest Blu-Ray player on the market. The base 80GB model costs $400 and the step up 160GB model goes for $500, while there are multiple players available for $200 with the average cost for Sony and Panasonic players appearing to hover in the $300 range. The PS3 is available for as low as $300 with the use of various coupons or promotions that show up on common deal forums, but it's still at least $100 more than a dedicated player.

You can debate the relative merits of some of the "generic" brands in that list, but Sharp and Samsung are certainly reputable electronics manufacturers. I won't dispute your other points; the HD-DVD players were in most respects superior to Blu-Ray players at the time. Now that things have caught up and the prices have dropped substantially, though, I'm as glad as you are that the format with superior capacity won out in the end.

Comment Re:use Ethernet - decoding wrong place (Score 1) 357

So, what you're calling for is that every single video device out there should have an MPEG-2 encoder capable of rendering (in some cases) 1920x1080p MPEG-2 video in real time (in some cases at 120 fps), all the time, in every situation.

That means the on-screen menu in your HD cable or satellite box: encode MPEG-2 in real-time.

Video Game system or computer display: Encode MPEG-2 at 1920x1080 in real-time.

DVD Player/Blu-Ray player: Either pre-render every single possible menu combination as an MPEG-2 video and have pressing down on the remote trigger the video where "languages" is selected, or encode MPEG-2 video in real-time.

What we need for nearly everything other than an optical disc player (since the menu problem can be trivially solved there) is an MPEG-2 encoder capable of handling real-time compression and transmission at resolutions all the way up to 1920x1080. That's pricey hardware. While economies of scale would drive the cost down to a decent degree, it's still far more complex than necessary, given that we've already worked out a spectacular alternative.
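
To see why compression is unavoidable in that scheme at all, here's a rough sketch (active-pixel figures only; real links add blanking and line-coding overhead): uncompressed 1080p doesn't come close to fitting down a gigabit Ethernet run, while HDMI is designed to carry the raw pixels.

    # Why the scheme needs an encoder in every box: uncompressed 1080p
    # won't fit in a gigabit link. Active-pixel figures only; real links
    # add blanking intervals and line-coding overhead.
    GIGABIT = 1_000_000_000
    for fps in (30, 60):
        raw_bps = 1920 * 1080 * 24 * fps  # 24-bit color, active pixels only
        print(f"1080p at {fps} fps: {raw_bps / 1e9:.2f} Gbps raw, "
              f"{raw_bps / GIGABIT:.1f}x a gigabit Ethernet link")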

Of course, you can reasonably argue that all we'd need is a built-in mechanism for guide systems (which already exists for DTVs), extended to allow for menus from DVD/Blu-Ray/cable boxes. That still doesn't solve the problem of the game system or of computer/TV convergence. Nor does your external transcoding box (ostensibly for legacy devices, but really for game systems and computers); why not just plug that external video straight into the TV, like we already do?

Yeah, I don't see that being a cost-effective approach to the problem. Storing actual pre-recorded video is trivial, as you've pointed out. Generating anything in real-time is much, much more difficult. It's far easier to use a standard based on computer monitors that allows for inexpensive cabling and switching methods.

Comment Professional Degree (Score 3, Interesting) 372

Your motivation appears to be focused purely on employment and earnings (not that there's anything wrong with that). As such, I'd have to advise against graduate studies in CS or similar. While they don't have to be theoretical (master's degrees offer a lot more flexibility in this department than PhDs), they are still focused at their core on contributing to the common body of knowledge. You're probably better off with a master's or doctorate that falls into the category often described as professional degrees: MDs, law degrees, MBAs, and so on.

You've mentioned an MBA. It's too early for that; while it's certainly not a hard and fast rule, the general consensus is that an MBA works much better after you've been in industry for a few years. You'll be better equipped to discuss and apply the relevant ideas when you know how things work "in the real world." On top of that, it's important to realize that MBAs have become the new dime-a-dozen degree. As the popularity of the degree exploded, every commuter school and online university began offering one. Without stooping to elitism (I'm sure the education is sufficient), you risk entering a glutted field with a less than stellar name on your diploma. That's a bad way to make a stack of money and a two-ish-year time sink worthwhile. If you decide on an MBA, work for 3 or 4 years, then aim to obtain your MBA from one of the top 40 or so schools. Again, I'm not saying that you'll get a sub-par education or won't succeed with an MBA from tier-3 State U, but it will be more difficult to stand out from a crowd waving MBAs from the big names.

With all that said, may I recommend pursuing graduate studies related to health informatics? At its simplest level, it's a practical and always-necessary application of CS to the medical field. With the current push from the Obama administration for Electronic Medical Records and the flow of government money sure to follow, it's likely to be an enormous growth industry in the coming years. The basic ideas about DB structure and interface design translate to other industries if you ever need to leave. Health informatics-focused graduate programs are available through some business schools as a hybrid of MIS studies, and through the bigger health science schools as their own degrees or as specialized variations of Health Administration degrees.

Comment Just go back to the grandparents (Score 1) 129

The two granddaddies of the genre have a lot to teach us: Resident Evil on the consoles and Alone in the Dark on the PC. There are really only two things that matter: the camera and the resources.

Camera: we can't be allowed to see everything. Horror movies exploit this by giving us a limited range of view, setting the movie at night, etc. RE and Alone in the Dark didn't always let us see everything; an enemy would be hiding off screen and we could only just hear them. That works. It taps into deep-seated fears in humans, the very same fears that made us hate the dark when we were small children.

Even Resident Evil 4, for all of its full 3D without pre-rendered backgrounds, managed to do this right. The over-the-shoulder view caused your main character to obscure a good part of the screen. You had to stop, look around, make sure things were clear, and continue. You'd sometimes hear a sound and not see the enemy because it was hidden behind your character. In an FPS, not seeing everything would be infuriating, but it's part of the horror genre.

Limited resources: We can't be a super soldier. We can't have a machine gun, we can't have a rocket launcher, and we probably shouldn't even have enough bullets for our pistol (yes, I realize many survival horror games have had all of those elements, but they generally arrive late in the game or are included as a plot point rather than a weapon). Every encounter is supposed to be a balancing act: enemies can be killed nearly instantly with a headshot, but that requires taking time to line up the shot. Alternatively, you can pump them full of easy body shots, but use 3x-6x as many bullets. While you're trying to decide what to do, the enemy is approaching. The sinking feeling you get when you hear an enemy and realize you only have 3 bullets to deal with it is far more horrific than anything that ever appeared in Doom 3.

Meanwhile, most games won't even let you carry that many weapons; you're generally limited to a pistol and a larger weapon until later in the games. In Resident Evil, for example, even if you do manage to pick up the powerful rifle or big .357, you have to choose if you're willing to give up the group-clearing shotgun to use it.

Along the same lines as resources: your character can't be indestructible. An enemy or two, particularly bunched up, SHOULD have the ability to kill or at least seriously injure you. Having no bullets doesn't matter if you can easily tough it out through a swarm.

The type of enemy doesn't really matter. I'm sure a suitable game could be made with vampires or crazy people or werewolves or even enemy soldiers if it were done correctly. As long as game creators use a horror-inspired camera and limit your resources, you're on track to a good horror game.

Final note: What many people have said about FEAR, Left 4 Dead, etc. is true. They are not horror games. Rather, they are FPS games with monsters instead of soldiers. If you doubt it, just run them through the camera/resources test. Shooting 100 zombies from an over-the-barrel view is the functional opposite of a horror game.

Comment Re:I stopped reading... (Score 1) 459

Er, isn't your comment simply telling me I'm correct? To paraphrase: "The cut-off exists, but some people who don't meet the cut-off don't think they can afford it." Starting from $0, you can pay half of the cost of an Ivy League education on government loans alone. Throw in private education loans (a bad idea, but available), student work, and family contributions, and it can be done.

But how much do these people you know (or their families) earn? The $60,000 cut-off (it was $40,000 until last year or the year before) is intended to attract those from lower-middle class and poor families. It is not intended to provide a hassle- and pressure-free funding source for the upper fifth of incomes. Admittedly, any "bright line" policy fails at the fringes; a family making $65,000 is probably no better equipped to send a child to Harvard. However, the $60,000 policy certainly instills more confidence than a vague and ambiguous "we help those with difficult financial situations." I would hope and expect that for a student on the very fringes, scholarships and government grants would take up much of the slack. On the flip side, I would hope that any family on the very fringes would examine how it might qualify; a simple IRA, 401(k), or 529 investment would drop AGI down to the range necessary to qualify their children.

If your family makes $100,000 per year, then you don't qualify. If you truly want to attend one of those schools, it can be done; that said, it may be a bad choice financially, because it will require enormous sacrifice on the part of the student and his or her family. Moving to a smaller house, investing money each year from birth, forgoing many luxuries, and so on is not the sort of situation many middle and upper-middle class families are excited about. A family pulling in $100,000 can certainly send a child to Harvard, but it will require some serious sacrifices (and planning ahead).

Meanwhile, the federal government already provides plenty of excellent opportunities for families in those situations. 529 plans allow for completely tax-sheltered investment, which for someone in the upper two fifths of incomes is the functional equivalent of a 25%-35% match before we even consider any returns on the invested money. Further, the government offers a well-designed, federally subsidized student loan system that will cover the immediate costs and allow for a wide range of repayment options based on income after graduation.

Bottom line: three fifths of American families earn under $60,000 (the cut-off for the middle quintile is $55,000). There is some overlap with the fourth quintile, which runs to $88,000, and that is the only group that really gets caught. It's likely that many in that group won't qualify for grants, and it's entirely possible that the family is unable to sock away enough through a 529 to make it work. A person in this position COULD attend an Ivy if they really wanted, but it would require a hefty sacrifice on both the student's part and that of his or her family. If there's any room for criticism in this argument, it's that such schools could effectively cover all ranges if they upped their cut-off to $80,000. Of course, the Harvard policy is specifically intended to pull in those from the bottom tiers of US income, not to make things affordable for the middle class. It's not billed as "anyone can come here" so much as "we want financial diversity in the form of lower income students, and this is how we will accomplish that goal."

For someone in the upper quintile, though, there's not much sympathy. 529 plans will support the funding, and subsidized loans are available to take up any of the slack for those in the lower range of that quintile. $300 a month (actually $400, but sheltered from tax) invested from birth will provide roughly $100,000 at age 18, assuming any moderate investment strategy. Not enough to pay for Harvard, mind you, but enough to make Harvard cost the same as any state school. $450 per month would cover the entire cost (note that these are 2009 dollars; properly invested, you should be able to beat inflation, which makes 2009 dollars acceptable for comparison's sake). Could a family making $100,000 afford $450 a month? Certainly. In fact, they could probably afford substantially more than that to compensate for the earlier years when things might have been "leaner." That family may not like the sacrifice in standard of living required, but it can certainly be done.
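
As a rough sanity check on that arithmetic, here's a quick sketch; the 2% and 3% figures are assumed real (after-inflation) returns, which is also why sticking to 2009 dollars works for the comparison:

    # Future value of a fixed monthly contribution from birth to age 18.
    # The 2% and 3% real returns are assumptions, not a recommendation.
    def future_value(monthly, annual_rate, years=18):
        r = annual_rate / 12
        months = years * 12
        return monthly * ((1 + r) ** months - 1) / r

    for rate in (0.02, 0.03):
        print(f"$400/month at {rate:.0%} real return: "
              f"${future_value(400, rate):,.0f} at age 18")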

Every person you know who "couldn't afford" Cornell could have paid; they simply couldn't justify the expense. I don't blame them one bit and I would make the same choice every single time, but I also don't complain that it's bullshit.

Comment Re:I stopped reading... (Score 1, Insightful) 459

For example, access to my country's equivalent of the Ivy League schools doesn't depend on your family's wealth, which means that if you are dumb as a doorknob and you happen to be the son of a billionaire then you still have to work your ass off in order to be admitted to one of those schools. It also means that if you are terribly smart and talented then you may enroll in those schools, no matter how poor you are. It's raw talent that matters, not raw cash.

Not to disagree with your other points, but Harvard, Yale, and most of the other Ivy League schools have a similar policy. Harvard's current income cut-off for free tuition, for example, is $60,000. If your family makes under that amount, you don't pay tuition; for reference, the median annual income for US families is in the mid-$30,000 to mid-$40,000 range, depending on how you choose to interpret the data.

You may have to pay room, board, and books, but if you're from a family earning under $60,000 and you're posting grades good enough for Harvard, then you can probably qualify for a scholarship or grant to cover those costs. If not, government-subsidized student loans (the preferred route for many better-off families) will certainly get the job done and allow plenty of flexibility in repayment.

Again, not to push against your points, but the Ivies' programs for poorer and middle-income Americans are not government-backed; they're a private choice made by each of those institutions. Some may argue, cynically, that the programs were put in place to deflect complaints about those institutions' multi-billion-dollar endowments, but the fact stands that the private institutions make those policies.

As for the other side of your assertion: that those with influence can't use it to manipulate admissions to your country's universities? That would run contrary to what the last few thousand years of political history have taught us. I would find it exceedingly hard to believe that, out of every country in all of political history, yours is the one somehow operating without influence by the powerful.

Unless, of course, the powerful aren't manipulating admissions because they're busy sending their kids to Harvard and Yale.

Comment Important questions (Score 4, Informative) 315

Is this common stock or preferred stock? Is the company contractually obligated to pay out profit, or a portion of profit, as dividends to its shareholders? For that matter, what is the structure of this company? How will this five-year period be enforced?

If you can't immediately answer these questions, you need to speak to an attorney. Period. There has been quite a bit of development in the last 10-15 years in terms of small business structure and practices, and I highly doubt that you have enough experience in how this company is legally structured to be able to make an educated decision. At this point, your question is like asking /. which server you should use at your business. We have absolutely no idea about any of the criteria or facts that would explain that situation.

Note that this is entirely separate from the equally good advice that others have been throwing around: if you were ready to leave, why are you now ready to stay for a fairly lengthy period of time? If it's just the money, then it's doubly important to get to a lawyer and have this situation analyzed carefully.
