The Internet

Journal: How we can topple big media - and it's not YouTube

Recent efforts by the RIAA, MPAA and now the TV studios to throttle, take down and demonise "file sharing" are seen by plenty of us around here less as a copyright battle and more as a fight to retain both their business models and their top-down control over customers and artists.

What we, as an internet community, need to do - before the cartels get the legal footing to remake the 'net into a good old-fashioned corporate-to-customer "one way street" of content - is take charge. The content needs to be freed; it needs to be available to the masses not via a torrent client scripted to monitor an RSS feed but via any regular set-top box (STB). Most importantly, it needs to be legal, and it needs to be somewhat better than YouTube quality.

And this is how...

Firstly we need a standard, open and easily implementable set of formats for audio, video and image content. They may not all be patent-free (yet), but it might be a start if these were H.264, Ogg Vorbis and PNG respectively. The latter can also be used for thumbnails in the feeds, which is the next step.

Secondly we need a network of interlinked channels of content, where a channel can contain any type of content, or links to other channels, with descriptions and thumbnails. As the content will be in standard formats, these channels can offer direct (or distributed) download links rather than forcing everyone to stream. A channel syndication format already exists: it's called MRSS. Again, it might not be perfect, but it's a start.
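Such a channel would already be machine-readable with a stock XML parser. Here's a minimal sketch in Python of pulling titles, download links and thumbnails out of an MRSS-style feed; the feed contents and URLs are invented for illustration:

```python
# Read an MRSS-style channel using only the standard library.
# Real Media RSS uses the http://search.yahoo.com/mrss/ namespace.
import xml.etree.ElementTree as ET

MRSS_NS = {"media": "http://search.yahoo.com/mrss/"}

FEED = """<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <title>Example Channel</title>
    <item>
      <title>Episode 1</title>
      <media:content url="http://example.org/ep1.mp4" type="video/mp4"/>
      <media:thumbnail url="http://example.org/ep1.png"/>
    </item>
  </channel>
</rss>"""

def list_items(feed_xml):
    """Return (title, content url, thumbnail url) for each item in a feed."""
    root = ET.fromstring(feed_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title")
        content = item.find("media:content", MRSS_NS)
        thumb = item.find("media:thumbnail", MRSS_NS)
        items.append((title,
                      content.get("url") if content is not None else None,
                      thumb.get("url") if thumb is not None else None))
    return items

for title, url, thumb in list_items(FEED):
    print(title, url, thumb)
```

A set-top box or media front end would only need about this much parsing to build a browsable channel list from a feed.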

Thirdly this needs to be packaged together in one lump of a spec, centrally coordinated by something akin to the W3C but open to all to implement. Now anyone making a set-top box with a network connection will be able to let their users surf and watch content. STBs with hard disks will be able to pre-fetch new content from bookmarked feeds whilst their owners are on holiday. Geeks with file servers under the stairs will be able to centrally store it and view or listen to it from any device in the house. XBMC will be able to fold it into their kernel, and there will be no geographical limits, no DRM and no central control. Everyone will be able to link to everyone else's feeds and content, creating their own channels, and this is where step four comes into play.

The fourth step sees the level of user-created content rise above that of YouTube. Yes, fun and funky home videos will exist on this new media-web, but the ease with which anyone will be able to mash up their own channels will act as a filter. If a large site dedicated to sci-fi hosts its own channel, it may pick up and "syndicate" the high-quality sci-fi shows that are out there. Imagine surfing your channels one night and finding a new episode of Star Trek: Phase II in 720p, ready to roll. This cross-linking, big-site hosting and blogging is what will allow the quality content to rise to the attention of the masses without being lost in the noise of everything else. Just open your "This Week's New Sci-Fi" bookmark, hit "Play All" and your Saturday night telly is sorted!

Fifthly - the legal standpoint - if these feeds are full of LOST_S04E01_480p_LOL.avi links then we'll be missing the whole point of the exercise: wresting control from big media into our own hands. The above filtering will cure some of that, if the big sites only link to legal content. The key is to start out hosting Creative Commons content (without mandating it), so you'll be able to listen to a couple of Nine Inch Nails albums and a whole bunch of stuff you've never heard of. Of course this author relishes that idea, but the unwashed masses have their existing comfort zones, and that is something only pressure and time can overcome. One way ahead could be home-brew "radio" stations taking off, playing content they themselves have sourced from the feeds and supporting themselves via ads. There will be no licensing fees to pay to the cartels because the cartels won't own any of the content. And once one or six "stations" become popular, the nervous users will have that comfort zone of being told what's good!

Ultimately all online content could shift to this model, from my own dodgy homegrown breakbeat techno (3 CDs full) of the mid 90s to the next Blair Witch Project. And once that shift is in full swing - and the whole thing is legal and available anywhere to anyone with a connected PC, laptop, mobile, TV or toaster - then big media will have no choice but to compete on our terms.

They will not be able to force DRM, streaming or geographical limits (though nothing would stop them from hosting different editions on their .com and servers, physical IP-blocking aside) but they will still be able to exclusively host their own content and even hold off switching on their download for America's Got Sandwiches until exactly 8pm on Saturday night if they like.

They will be able to embed adverts into their audio and video, and this will work in their favour - no one will bother torrenting an ad-free version if the legal, ad-embedded version is already there on their TV at the click of a button. We can even allow the MRSS-like feed spec to embed links to "Buy the plastic disc edition exclusively from our online store" or "Visit our merchandise store for concert tickets and exclusive must-have handbags" right alongside their feed, if that will help them get on board.

At the moment we don't have a coordinated, easy-to-consume free media distribution system online. There's content embedded in web pages, streaming via Flash applets, downloadable via HTTP and FTP. There are Creative Commons searches and archives, there are torrents and there are even plastic disc editions. There is also a plethora of Miros, iPlayers and codecs galore. But if, instead of confining access to the content to one application on one or two OSes, we make publication of the content an open, accessible and Really Simple Syndication system for media, we can make it so that Joe Sixpack doesn't have to get off his sofa to watch it - or spend hundreds of dollars on an HTPC instead of a few tens on a simple STB.

If we can do the above - and keep it legal - the cartels won't be able to attack it and that's when the revolution will begin.

User Journal

Journal: How the internet killed the space age

Disclaimer: This theory came to me at about four in the morning, whilst emptying my two-year-old daughter's potty. I was obviously in some kind of semi-awake lucid state. Anyway, I've done zero research on any of the following ramblings; you've been warned...

The space race had its origins in the Cold War of the mid-twentieth century. The Russians had stolen a march on the USA with their orbital flights, so the USA chose to aim higher: for the Moon. This is all well-known history, but what happened to the space age that this foreshadowed? Instead of commercial interests the globe over jumping on the technology and propelling our species around the solar system, only a handful of launch companies and satellite specialists popped up and, well, that's been about it until the X Prize came along about four decades later.

Correlation does not Causation make, but around the time the space age was peaking the information age was beginning. And when the fizz died down during the 70s and Apollo gave way to the Shuttle program, the microcomputer started to take over the world, leading to global connectivity and desktop computing. The common meme is that as of the birth of the 21st century the family car's onboard computer has more processing power than an Apollo lander.

Having no (nearby) interstellar neighbours to study, we only have one emergent species to track: ourselves. But if there's a standard model for said emergence, what order would these two steps take? The Bronze and Iron ages led into each other through incremental technological capability. But what, ideally, would the Industrial age lead to? (Excepting the steampunk fork, which might have been fun.) The required technologies of the Space and Information ages only overlap slightly, as demonstrated by the aforementioned Apollo-vs-car meme, and satellite communication technology largely remained analogue until the Information age overtook it.

One indicator of what killed the space age can probably be found in an analysis of where the money is. Imagine the combined revenue of IBM, Intel, Microsoft and Google - amongst many more over the years - having been invested in the space age instead. What would our world be like now? Would we still have the internet, or would we have exchanged that for a permanently manned base on Mars? Interestingly, without a Moore-driven technological explosion, the chunky-buttoned Star Trek future predicted at the time probably wouldn't be too wide of the mark.

Our largest clue is that the space age was very obviously provoked by the Cold War, and earlier than the internet's origin as a nuclear-proof network. It's also easy to theorise that without this superpower tension the Information age might have evolved more directly from the Industrial age, with its drive towards quicker, better and more efficient production. So one way of looking at it is that we're behind in that regard: the space race was a distraction of a decade or two from our natural technological progression, and the "correct" emergence path is therefore Industrial, then Information, then Space. If that's the case then we as a species probably got it a bit backwards, but we are human, after all.

User Journal

Journal: Star Trek: TNNG, let the debate begin!

This has been thrown around occasionally on Slashdot in the past, and here's my fleshing out of the idea.

- first, start pre-production soonish but hold off a few years; bring it out on the 25th anniversary of TNG.

- second, give them a decent ship that looks mean and doesn't have a barber shop onboard, preferably something akin to the sleek and fast-as-a-bastard Excelsior class

- third, show some of the dark side of the Federation. TNG and DS9 occasionally allowed us a peek into dimly lit bars, spaceports and mercenary cargo ships. Sure, you don't need to go all the way to Firefly extremes - we want it to be Trek after all

- fourth, give them a proper mission, or three, over the course of the series' run. Sure, have monster-of-the-week episodes to get people interested, but also expand on the DS9 theory of an overall plot (and at least decide up front what it is).

- finally, install a now older, wiser, and not at all annoying Captain Wesley Crusher

Here's how I see it playing out in my geek brain; it just needs a few, er, important blanks filling in. The first half of the double-length pilot will see a retiring Admiral Picard meet up with Crusher at his Ben Kenobi-style hut on some rocky planet somewhere. For $REASON (to do with his experiences gained whilst off with The Traveller, perhaps?) Starfleet have sent Picard to get Crusher on board for $MISSION. They go off somewhere and get stuck up a mountain, and after a reversal of fortune on TNG's "Final Mission" - with Picard helping him out this time - Crusher realises he now owes the old boy a favour, so he agrees to go back to Starfleet and check it out. Picard cheekily flies by the shiny new ship to bait Crusher along.

And after having dinner with his mum, probably, Crusher takes on the job under certain conditions, especially that he wants a couple of his own guys on the crew (who specialise in Tactical and $SOMETHINGELSE) - not in a Maquis-on-Voyager-oh-don't-we-all-get-along-nicely-after-all way, more a Garak-and-Odo-on-the-Defiant way: not in uniform, not technically Starfleet, but under Crusher's command nonetheless. For the hell of it you could throw in a bunch of rowdy, often drunk Klingons instead of bloody Vulcans.

Crusher, being a kick-ass pilot and engineering whiz, will also not be entirely liked by the crew: the sexy-assed youngish ensign girl who he keeps supplanting so he can steer the ship himself (though they end up finally getting it on eventually, probably); his first officer, who's pissed that this guy just got handed a captaincy while he was overlooked despite years of butt licking; and the Chief Engineer, because Crusher's not only always interfering but also crashing into things.

Set them off on their $MISSION, and let battle commence!

User Journal

Journal: a true database-driven file system

all files are given a unique id (local to the system). this could be based on the time the file was first created, in milliseconds, or on a counter that starts from a random offset (so that OS files aren't in predictable locations)

database layout:

file_id = unique id, used by all filesystem operations, shortcuts and link tables
name = name
description = long description
type = MIME type, e.g. "audio/mp3" (this replaces the file extension)
meta = descriptor, e.g. "audio"
flags = read/write/drm etc.
binary = the file's binary data
->depends = link table of files that this file needs
->dependent = link table of files that need this file
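One possible sketch of that layout in SQLite (via Python's sqlite3). The table and column names follow the layout above; the playlist table is an invented example of the "folders are just link tables" idea:

```python
# Sketch of the database-driven filesystem schema in SQLite.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE files (
    file_id     INTEGER PRIMARY KEY,   -- unique id used by all operations
    name        TEXT,
    description TEXT,
    type        TEXT,                  -- e.g. 'audio/mp3', replaces the extension
    meta        TEXT,                  -- descriptor, e.g. 'audio'
    flags       TEXT,                  -- read/write/drm etc.
    binary      BLOB                   -- the file's binary data
);
-- depends/dependent relationships live in a link table, not in files itself.
CREATE TABLE depends (
    file_id   INTEGER REFERENCES files(file_id),
    needs_id  INTEGER REFERENCES files(file_id)
);
-- A "folder" or playlist is just another link table.
CREATE TABLE playlist (
    list_name TEXT,
    file_id   INTEGER REFERENCES files(file_id)
);
""")

db.execute("INSERT INTO files (name, type, meta) VALUES (?, ?, ?)",
           ("Are You Experienced - Track 01", "audio/mp3", "audio"))
db.execute("INSERT INTO playlist (list_name, file_id) VALUES ('hendrix', 1)")

# A user's "folder" view is a query, not a physical location.
rows = db.execute("""SELECT f.name FROM files f
                     JOIN playlist p ON p.file_id = f.file_id
                     WHERE p.list_name = 'hendrix'""").fetchall()
print(rows)  # [('Are You Experienced - Track 01',)]
```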

files in this system no longer have a physical location; they can be mapped to by any means. different users could "see" the filesystem any way they wish:

the windows "my documents" folder would not be a folder but a query or link table, with the technicalities hidden from the casual user.

mp3 files could be listed by artist, genre, decade etc., and playlists could simply be link tables. all of these could be represented in the GUI much like standard folders are now: a user creates a "folder" and drag-drops files into it to create "shortcuts" to the original. the database spec itself would be internally extensible: mp3 files can have artist, title, album, mix, length, bpm etc. set, but a track appearing on both an album and a compilation would need duplicate entries, forcing a fully relational setup. indeed, mp3 files would no longer need a single name to refer to them - no more "jimi_hendrix_experience_are_you_experienced_track_01.mp3"!

a linux config file could simultaneously be found in //etc/config and //programs/myprogram/config, for example; different runlevel configs could also be separated this way. uninstalling a program would involve removing all its dependencies (tracked via a link table); if a dll file has no dependent links then it is safe for deletion. if a program attempts to install a dll file that is identical to one on the system, a copy is not made and only the links are updated - the program installer need not know this has happened.
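The uninstall rule (a shared library with no dependent links is safe to delete) reduces to a single query over the link table. A minimal sketch, reusing the files/depends idea from the layout above with invented file names:

```python
# A file is safe to delete only when nothing links to it as a dependency.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE files (file_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE depends (file_id INTEGER, needs_id INTEGER);
""")
db.executemany("INSERT INTO files (name) VALUES (?)",
               [("myprogram",), ("shared.dll",), ("orphan.dll",)])
# myprogram (id 1) needs shared.dll (id 2); orphan.dll (id 3) is unreferenced.
db.execute("INSERT INTO depends (file_id, needs_id) VALUES (1, 2)")

def safe_to_delete(db, file_id):
    """True when no remaining file depends on this one."""
    (n,) = db.execute("SELECT COUNT(*) FROM depends WHERE needs_id = ?",
                      (file_id,)).fetchone()
    return n == 0

print(safe_to_delete(db, 2))  # shared.dll is still needed -> False
print(safe_to_delete(db, 3))  # orphan.dll has no dependents -> True
```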

seamless network integration: if my brother's pc has a bunch of mp3 files on it and we're on the same network, our mp3 lists and queries would appear as one. good gui design would show the distinction between local and external files (e.g. a different colour), and options to create a local copy would appear under the context/right-click menu, with (buildable) options for getting all-by-artist, all-by-album, all-in-this-or-that-playlist etc. files that are identical but exist on both machines could be shown normally or in yet another colour - user definable!

once we get away from the rigid tree-and-leaf thinking of filesystem and network layout, there's a lot we can do (and not just sort our mp3s and holiday photos).
Operating Systems

Journal: Suggested new OS FS permissions/security model

Being a web developer I'm used to having my sites live in virtual server directories, the basic permissions of which (read/write/execute etc.) are set by the administrator. But the fundamental restriction in place by default is that I cannot write or modify a file anywhere above my virtual root directory, regardless of its physical location on the server.

This imposed glass ceiling could be stretched to program permissions across an OS. Imagine a mail client called Origami Email (OE) (c:\programs\oe) that had a vulnerability exploited by a malicious email. The best the incoming worm could hope to achieve is the modification of files in the directory it resides in (c:\programs\oe\emails) or any subdirectories (c:\programs\oe\emails\archive), but not its parent or an adjoining branch - i.e. the OS core and other programs would be wholly inaccessible to it. All that needs to be done is to have the file system know where the code accessing it originates and act accordingly.
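The glass-ceiling check itself is just a path-containment test. A minimal sketch in Python with invented paths (a real OS would enforce this in the kernel, not in user code):

```python
# A program may only touch its own directory or subdirectories.
import os.path

def may_access(program_root, target):
    """Allow access only at or below the program's own directory."""
    root = os.path.abspath(program_root)
    tgt = os.path.abspath(target)
    return os.path.commonpath([root, tgt]) == root

print(may_access("/programs/oe", "/programs/oe/emails/archive/a.eml"))  # True
print(may_access("/programs/oe", "/etc/passwd"))                        # False
print(may_access("/programs/oe", "/programs/photoedit/x"))              # False
```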

Issues now arise when it comes to usability: if PhotoEdit lives in c:\programs\photoedit it won't be able to get to c:\documents to open or save photos! So the default permissions would set c:\documents as a DMZ (enabling the aforementioned worm to stomp all over it if it wished, obviously), and applications could have run-time permissions granted much like web certificates - "always allow", "this time only" etc. A more secure setup would allow PhotoEdit full access to c:\documents, but only after the user first tried to use it and clicked "always allow".
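The run-time grant idea can be sketched as the same containment check plus a per-program allow-list of user-approved directories; the grants list and paths here are invented for illustration:

```python
# Default glass ceiling, plus user-approved "always allow" directories.
import os.path

def _contains(root, target):
    root = os.path.abspath(root)
    return os.path.commonpath([root, os.path.abspath(target)]) == root

def may_access(program_root, target, grants=()):
    """Allow the program's own subtree, or any directory the user granted."""
    if _contains(program_root, target):
        return True
    return any(_contains(g, target) for g in grants)

grants = ["/documents"]  # user clicked "always allow" for PhotoEdit
print(may_access("/programs/photoedit", "/documents/photo.png", grants))  # True
print(may_access("/programs/photoedit", "/programs/oe/emails", grants))   # False
```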

Allowing any executable access to the whole file system upon a simple request is an outdated methodology. OSes should instead move away from hacked-together "system folder" traps and towards a more "top down" approach. This is also simpler than a firewall-type approach to FS tech, as the OS root is fundamentally protected "out of the box" by being on a different branch of the FS tree. A mounted virtual directory approach could also be included: the net could be easily firewalled by having the tcp/ip stack as a root mount (c:\tcpip), with programs reading and writing to it as they would a file (c:\tcpip\http\\

It doesn't take too much imagination to then extend this approach into RAM: where programs reside and what address space they can influence should directly mirror their position in the FS, thereby also removing the ability for malicious programs to subvert the FS protection by jumping address space into a region with full FS write/execute permission.
