Comment Re:Good grief... (Score 1) 681

> The nuts and bolts of computer architecture aren't in the scope of computer science. Sure, you might want to know a little about how things work at an abstract level, but let's be clear: computer science and electrical engineering are two different disciplines.

There's also a big gap between traditional electrical engineering and computer science - computer engineering really is its own thing.

The standard EE curriculum covers a lot of topics: E&M, circuit analysis and design, signal analysis and information theory, wireless communication, VLSI and VHDL, linear electronics, control systems, FPGAs / MOSFETs / ASICs, etc. The most computer architecture that EE covers is the basics of digital logic, and *maybe* a selection of other topics, like memory addressing and a basic instruction set, but it's really just an introduction.

Computer engineering is where the rubber meets the road, so to speak. Consider all of the specialized things that a computer engineer studies: processor design, instruction sets, memory / storage / caching, buses / pipelines / wire protocols, networking, parallel processing, power management (especially conservation), GPUs, firmware and device drivers, multithreading and stack analysis, security systems...

My point is simply that a typical EE barely scratches the surface of CE, and a typical CE has only a modest overlap with both EE and CS.

It's frustrating that so many people don't appreciate just how deep, rich, and technically challenging these areas are. It's oddly stylish to diss CS and CE as being of lesser scientific caliber than the traditional sciences - to look at a computer as a commodity, a basic web browser wired to a basic keyboard. It's disappointing in general, and it's culturally perpetuated by offhanded comments like Nye's.

Comment Re:Sweet F A (Score 1) 576

> Even if they were at our level of technology, if they have starships, then they have nuclear weapons. They don't have to invade; they can simply drop rocks or nukes on us to accomplish the same thing, and there wouldn't be anything we could do about it...

Yeah, nukes aren't even necessary. A lump of any kind of matter, parked at the top of Earth's gravity well and possessing sufficient bulk / shape / cohesiveness to deliver a sizable mass to the surface, will be devastating. If you can multiply that by, I dunno, several thousand - you have a fairly low-tech and cost-effective means of civilization annihilation.
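
For a rough sense of scale (back-of-envelope, with assumed figures): an object falling from the top of the gravity well arrives at roughly Earth's escape velocity, about 11.2 km/s, so even a modest rock packs a nuclear-scale punch.

    import math

    # Assumed figures: a ~20 m rocky fragment at typical asteroid density.
    density = 3000.0                                         # kg/m^3
    diameter = 20.0                                          # m
    mass = density * (4/3) * math.pi * (diameter / 2) ** 3   # ~1.3e7 kg

    v = 11_200.0                          # m/s, ~Earth escape velocity
    energy = 0.5 * mass * v ** 2          # joules, ignoring atmospheric losses

    TNT_TON = 4.184e9                     # joules per ton of TNT
    print(f"{energy / TNT_TON / 1000:.0f} kilotons of TNT")  # ~190 kt, ~12 Hiroshimas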

This whole "gravity" thing is really a bummer. If mankind can ever conquer its internal existential threats - global war, nuclear proliferation, climate change, and the cultural-dumbness bomb called "the Kardashians" - then our own gravity well becomes our largest existential vulnerability.

Comment Riiight (Score 1, Insightful) 85

"There is reason to believe that at least some of its computer-conceived inventions could be patentable and, indeed, patents have already been granted on inventions designed wholly or in part by software."

Right. And according to Fox News, "It is SAID BY SOME that Obama isn't a native citizen. Not that *we're* saying it, mind you, so we can't be held accountable. It's just, you know, THEY said it was true. Who's THEY? Well, we can't tell you, and we can neither confirm nor deny that we're using 'they' in place of 'we.' So we're just going to state that it's some number of unnamed experts, or the public at large, or whatever. You know... THEM."

Comment It's jetpack technology: always 10 years away. (Score 2) 248

Over the years, I've invested thousands of dollars in several home automation platforms. I've yet to have an experience that I'd call "good."

Candidate #1: X10. Future-tech, circa 1978.

  • Pros:
    • Drop-dead simple implementation - there's a physical dial on every receiver to specify a code, and a physical dial on every controller to specify which codes it controls.
    • Supported by a broad set of manufacturers back in the 1990's.
  • Cons:
    • Wildly unreliable protocol = don't count on your lights actually turning on. Flakes out at the drop of a hat.
    • Hardware had extensive quality issues. Devices spontaneously died without warning. Wonderful if you enjoy debugging your light switches; terrible for people with better things to do in life.
    • Even when working perfectly, the latency was unacceptable: waiting a full second for your lights to turn on becomes painful fast.
    • No centralized management. Communication was largely one-way - switches broadcast; receivers receive - so things like "reporting status" and "verifying connectivity" were impossible (see the sketch after this list).
    • Protocol security? What's that?
    • Deprecated and dead.
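
A toy model of that fire-and-forget addressing (illustrative only - this is not the real X10 powerline encoding, and the names are mine):

    # Controllers broadcast (house, unit, command); receivers act if their
    # dials match. There is no return path, so the controller can never
    # confirm that anything actually switched on.
    class Receiver:
        def __init__(self, house, unit):
            self.house, self.unit, self.on = house, unit, False

        def hear(self, house, unit, command):
            if (house, unit) == (self.house, self.unit):
                self.on = (command == "ON")

    def broadcast(receivers, house, unit, command):
        for r in receivers:
            r.hear(house, unit, command)  # may silently fail on a noisy powerline
        # ...and that's it: no ACK, no status query, no connectivity check.

    lamps = [Receiver("A", 1), Receiver("A", 2)]
    broadcast(lamps, "A", 1, "ON")
    print([lamp.on for lamp in lamps])  # [True, False]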

Candidate #2: INSTEON: The Commodore Amiga of home automation.

  • Pros:
    • Designed with a lot more redundancy and reliability than X10. Something about mesh network communication and blahblahblah.
  • Cons:
    • Overpriced. Holy crap, overpriced. Starter kits that controlled a single lamp ran for like $500.
    • One vendor = an extremely constrained range of products. Sure, some of the gear was backwards-compatible with X10, but mixing network gear was a great way to drive yourself insane fast.
    • Terrible business model = stunted growth and slow, painful death.

Candidate #3: Z-Wave: The People's Home Automation Platform.

  • Pros:
    • Totally open protocol! Anyone can make a Z-Wave-supported device!
    • Potential for built-in reliability through mesh communication, etc.
    • Hierarchical mesh architecture can be centrally managed by a hub.
  • Cons:
    • "Anyone can make a Z-Wave-compatible device" =/= "anyone can make a *good* Z-Wave-compatible device."
    • Entry-level devices are cheap, but inadequate. Fully-capable devices are reliable, but expensive. There are also expensive devices that are crippled, but no cheap devices that aren't. Have fun with that.
    • The architecture is both overcomplicated and poorly documented. Want to figure out how scenes work? Plan on setting aside an hour to scrape together bits and pieces of information from different vendors, and glue them together with guesswork and trial-and-error.
    • Lots of potential... not as many products. In theory, Z-Wave is great for motorized blinds. In practice... there's like one company offering an overpriced half-baked product, and an Instructable DIY video.
    • Hub architecture is feasible... but good luck finding a decent implementation:
      • SmartThings wants to be hip and polished, but feels like it was designed by ADHD-afflicted high school students as a summer project.
      • MiCasaVerde / MiOS / Vera is ambitious... i.e., overambitious, i.e., no support. Great for those who enjoy hacking a commodity-based Linux box and digging through log files to figure out why the kitchen lights won't turn on. The Facebook group is kind of surreal: it's a company rep posting happy-happy-joy-joy patch notes, and dozens of people asking why their Vera won't respond and why customer service won't get back to them.
      • Home Depot Wink is a subscription-based service. Let that sink in: you'll have to pay $x/month for the privilege of automating your light switches.
      • A handful of weird, little-known contenders exist (Staples Connect, ThereGate, the "Jupiter Hub," etc.), with virtually no buzz (and the bit that's there is typically poor).

Candidates #4-50,000: Belkin WEMO, ZigBee, MyQ, Ninja Blocks, IFTTT, etc.: The Home Automation Also-Rans.

All of these products and platforms trace a consistent trajectory - from gee-whiz Kickstarter prototype to "whatever happened to..." entry on the technology scrap heap. None of them ever builds enough momentum beyond diehard hacker/maker cliques to warrant consideration above "free-time toy project" status.

Comment Re:UI code is bulky (Score 1) 411

> You will find that an MPEG2 decoder is substantially more complex than DeCSS.

First, "complexity" and "size" are completely different concepts, and we're only discussing codebase size here. Of course, extremely complex code can be very small (see also: MD5, RSA, etc.), and extremely bulky code can be very routine. The auto-generated code for instantiating all of the controls in a window can be thousands of lines long, but they're very bland and mundane instructions. ("Create a button at coordinates x1, x2, y1, y2; set its border, background color, font, and caption; hook it up to these event handlers...")

But just to reinforce the point: here is a full MPEG-2 decoder. It's about 130 KB, uncompressed.

> VLC is the opposite of pretty stripped-down. It does everything, including ripping DVDs and transcoding video.

First, it's certainly stripped-down as compared with other media rendering packages: Windows Media Player, WinDVD / PowerDVD, and of course iTunes.

Second, those features - ripping DVDs and transcoding video - are not only expected in modern media player packages; they also reuse the same core functionality. Like ordinary playback, ripping and transcoding are basically consumers of the codec's output - the software just delivers that output somewhere other than the screen, such as to a storage device, or as the input to a different codec.
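
To make that concrete, here's a hedged sketch (every name below is a stand-in, not any real codec API) of how one decode loop can serve all three features:

    # Playback, ripping, and transcoding share one decode loop; only the
    # sink differs. decode_frames stands in for the real "core" codec work.
    def decode_frames(source):
        for i in range(3):                 # pretend these are decoded frames
            yield f"frame-{i} of {source}"

    def play(frame):      print("render to screen:", frame)
    def rip(frame):       print("write raw frame to disk:", frame)
    def transcode(frame): print("feed frame to another encoder:", frame)

    def pipeline(source, sink):
        for frame in decode_frames(source):
            sink(frame)

    pipeline("dvd", play)        # ordinary playback
    pipeline("dvd", rip)         # ripping: same frames, different destination
    pipeline("dvd", transcode)   # transcoding: same frames, into another codec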

Comment Re:UI code is bulky (Score 1) 411

> Is that really 28MB of code, or is that 1MB of code and 27MB of bitmaps, sound files, and other crud?

That's true - media resources are bulky, and thanks to plentiful storage and bandwidth, we don't have nearly the pressure to constrain these sizes that we did in 2000.

However, if you review just the code base for WinZip, I can guarantee that well over 95% of it is UI code, as I mentioned above, and much less than 5% of it is the actual data compression/decompression "core functionality."

Here's another example: Media rendering. The actual codec for DVD media is tiny - DeCSS can be printed on a T-shirt! - and the entire source code package for the LAME MP3 codec is like one megabyte - but media rendering apps tend to be huge. And I'm not even talking about bloated monoliths like iTunes; VLC is pretty stripped-down, and it's still 33 megabytes. The logic that it needs to *show* you the decoded video in a proper UI is extensive.

Comment UI code is bulky (Score 1) 411

PKZIP.EXE and PKUNZIP.EXE, together, are about 80 kilobytes.

The current version of WinZip for Mac is 26 megabytes, or 26,000 kilobytes. That's roughly 325 times the size for the same basic functionality.

However, I don't see a lot of people preferring the command-line versions. Why? Because it's easier to drag-and-drop a bunch of files into a dialog box and select an output folder than to type all of that crap into the command line WITH the right flags AND no typos.

Things like menus, options / configuration panes, and nicely formatted help documentation are also preferable to "pkunzip.exe -?", and then remembering that you have to pipe the output to MORE in order to read the six pages of help text spewed out to your terminal window.

UI code is bulky, because it's extraordinarily detail-oriented. Think of all of the operations that your application UI has to support: windows, and resizing, and hotkeys, and scrolling, and drag-and-drop, and accessibility features and visual themes and variable text sizes and multithreaded event loops and asynchronous event handlers and standard file dialogs and child window Z-ordering and printing and saving application configuration info... etc.
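
A small taste of that detail work, again in tkinter (illustrative only; a real app multiplies this by every feature on the list above):

    import tkinter as tk
    from tkinter import filedialog

    root = tk.Tk()
    status = tk.Label(root, text="Ready.", anchor="w")
    status.pack(side="bottom", fill="x")

    def on_resize(event):
        # <Configure> also fires for child widgets; filter to the window itself.
        if event.widget is root:
            status.config(text=f"{event.width}x{event.height}")

    def on_open(event=None):
        # One handler serves both the menu item and the hotkey.
        path = filedialog.askopenfilename()
        if path:
            status.config(text=path)

    root.bind("<Configure>", on_resize)   # window resizing
    root.bind("<Control-o>", on_open)     # hotkey

    menubar = tk.Menu(root)
    filemenu = tk.Menu(menubar, tearoff=0)
    filemenu.add_command(label="Open...", accelerator="Ctrl+O", command=on_open)
    menubar.add_cascade(label="File", menu=filemenu)
    root.config(menu=menubar)

    root.mainloop()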

If our IDEs didn't include visual UI designers and auto-generate like 99% of that code for us, app development would be horribly stunted AND much more preoccupied with hunting down bugs in UI code.

But all of this UI code is bulky and verbose and nitpicky because the UI is extremely important for any modern app: thousands of apps feature excellent functionality that is impossible or painful to use because the UI sucks.

Comment Hello, FTC / DOJ? (Score 0) 145

If true - how is this not a flagrant antitrust violation?

Company X provides a device that collects personal data.

Company X announces a standard that prevents anyone from using such data for purposes such as advertising without the user's consent.

Company X exempts its own services from this restriction, such that its services - which otherwise compete on par with third-party services - can utilize such data notwithstanding, or even contrary to, the user's explicit withholding of consent.

Company X's services therefore have an unfair competitive advantage that is directly leveraged on Company X's sale of the device to users.

This is pretty much the definition of unfair competition in the form of tying. If the FTC / DoJ Antitrust Division had any teeth and, er, other body parts, they'd be all over this.

Comment Patent "reform" (Score 2) 139

I posted an article describing the "why" a month ago. I'm totally not surprised that the current reform effort exhibited the same arc.

That general model is exactly why this initiative collapsed as well. Several aspects of this reform - such as "attributable owner" rules, i.e., requiring patent applications to disclose the real party in interest, as a measure against shell companies - were supported by large interests that benefited from them and opposed by large interests that didn't. The result is stalemate, just as we've seen countless times before in the patent "reform" discussion.

The only measures that make it through the "reform" system are mild improvements that don't affect some entities differently than others. And even those can be difficult - e.g., the first-to-file change in the America Invents Act is great for well-funded enterprises, but more problematic for small businesses. In that case, large enterprises simply steamrollered the opposition with lobbying cash.

The upshot is that the "reform" system is, itself, deeply dysfunctional. An additional tragedy is that efforts that would objectively improve the patent system for everyone - such as giving examiners more time to perform their examinations, and implementing more accountability for technically incorrect arguments - get lost in the struggle.

Comment An easy choice... (Score 5, Insightful) 829

The key to this dilemma comes down to one word:

"Microsoft will face an unenviable choice: Stick to plan and put millions of customers at risk from malware infection,"

I don't think that Microsoft actually considers these people "customers." I think MS very distinctly considers them non-customers of its flagship product, since they have not purchased any of the four latest versions (Vista, 7, 8, 8.1). All of Microsoft's customers should have followed its exhortations over the last five years to spend a few bucks, upgrade, and dump their now-13-year-old OS.

It's indisputable that across the computing industry, the perceived mandate of legacy support in next-gen OSes is increasingly feeble. In non-desktop markets - e.g., consoles and phones - the presumption was never there to begin with (starting with the Super Nintendo!). Web programming exhibits similar tendencies - how many Java applets from back in the day won't run on modern browsers? And won't that include the entire Silverlight platform in a few years? The tendency is that the river of upgrades carries all projects of significance along in its current, and the projects that gather on the banks (i.e., don't receive newest-OS upgrades) are... detritus. Right or wrong, that's the view.

Comment Re:Ugh (Score 2) 169

> Flipping the classroom and making you work in teams are completely different things.

That's true, but you've missed my general point, which is: For students who are good at learning on their own - i.e., the cream of the crop - class time spent on verifying that they are learning the material is a complete waste of their time.

That is actually my biggest complaint. Typically, I would spend two hours in a traditional lecture learning, and four hours outside of class on independent learning and skill development. Instead, I now spend six hours outside of class learning everything on my own, and four hours in class proving it.

One of the most important skills to be developed in academia - particularly at the undergraduate level - is the ability to learn independently of a classroom agenda. Being asked to spend several hours per week in class working problems for the instructor, so that he/she can help with difficulties (or, as in my case, baby-sit the progress of the class), is not only inefficient for people who can learn on their own - it actively discourages the development of this skill. Students don't need to be diligent about mastering material on their own if classroom time is solely used to push them through the process.

Comment Re:Ugh (Score 2) 169

> It seems to me you have only learned half the lesson this method of pedagogy is meant to teach. Why don't you find the other well-prepared and conscientious students in your class, work with them, and shut out the losers?

Because the teams are assigned arbitrarily and we can't switch. We are required to sink or swim with the other schlumps in our team, irrespective of any differences in effort or intelligence. End of story.

Comment Re:Ugh (Score 1) 169

> Count your blessings. You never understand the material half as well as you think you do until you have to explain it to someone else.

I would love to have the option to develop that skill - e.g., voluntarily forming or joining study groups, or signing up as a tutor or teaching assistant. But in my case, I'm essentially required to teach slacking students to protect part of my grade. Thanks to the group structure, there is absolutely no recognition that some students are bailing out other students.

I am working three times as hard as my teammates - learning the material on my own, and then spoon-feeding it to them - and yet, we are all getting the same grade. Please tell me how I am "blessed" to be in this situation.
