

Journal: Reading as a Transaction

Journal by mugnyte

There's been a bit of a flurry in marketing and advertising circles ever since Danny Carlton posted a rant asking web sites to detect Firefox's ad-blocking plugins and to stop delivering content if advertising could not be sent as well. This went through the usual wash-and-wear cycle here on Slashdot and on Digg, with the expected results: Silly boy! Many good arguments were made; here's an expanded summary drawn from Digg, Adblock's own forum and Slashdot:

Is it illegal to block ads?

  • There's little legal ground to claim that a web site, taken together with its ads, is a single copyrighted work. The true intellectual content of a web site lies in the new information presented. Otherwise, a conglomeration of ads in a specific configuration (with no original content at all) could be considered copyrighted and of value, when it is commonly understood today that such pages are "empty".
  • There are no laws requiring content gained from web sites to be strictly bundled with its advertising in all media. For example, one may copy/paste text from a web site, store the sent HTML file, or simply re-write the concept on one's own web site without breaking any laws, save for whatever copyright restrictions the source has outlined (noncommercial/personal use is typically permitted).
  • Radio and TV consumers who skip or record around commercials are not stealing, a legal principle descending from the tape recorder, the VCR and the CD, with DVDs muddying the waters by forbidding consumers to break protection schemes. Technically, equipment makers have been "incentivized" by content producers not to provide automatic commercial skipping, but nothing forces one to consume all the ads in a recording; you can skip them, albeit perhaps manually, without infringement. There are no ad delivery mechanisms online that can be construed as "protection schemes" that ad-blocking software would be "cracking". Indeed, the requirements for a proven and robust protection mechanism have been tested legally several times, and trivial cases have been discarded.

Is it technically feasible to prevent ad blocking?

  • Browsers are under no obligation to display any portion of an HTML file they deem incompatible with their parser, display system, etc. Audio, print and text browsers already fall under these scenarios, as do older browsers. In short, there is no lasting way to discern one browser from another; if a Firefox release provided an "exact IE mode" (identification, behavior matched per JavaScript idiosyncrasy, etc.), a web site could not distinguish Firefox from IE.
  • If a web server forces its own content to be delivered only after all advertisements have been sent to the browser (instead of sending content inline, as is the typical practice now), a totally new paradigm must be built into many content delivery systems. In these systems, advertisements are sourced from other servers and can be requested in any order. Additionally, even if all content is delivered, it need not be displayed, as outlined above.
  • If all advertising is delivered from the source content web server, instead of from various ad-syndication servers, ad blocking must discern between content and commercial. However, ad blocking is typically based not on the semantic meaning of the content but on its format (image, animation, etc.) and source. Thus, unless a server rotates the names of the files it serves as advertising, those files will be blocked after their first impression, while the other files (content) are kept. One could argue that this is "post-impression" blocking, but since modern blockers share information from client to client, a site could find that all of its rotating image names are blocked on all clients after one impression anywhere, which brings us back to pre-impression blocking.
  • Ad blocking is implemented in some fashion on all major browsers (IE, Opera, Safari). Singling out Firefox simply reflects a lack of knowledge about the browser marketplace. This is a feature born of the grassroots demand for browser extensibility. There is no technical way a browser could regulate the actions of its extensions, save through the API that influences root browser behavior. The HTML stream and request stack are usually under the control of the extensions; this is the core method for providing extension functionality.
  • Ads that use Flash (or now, Silverlight) require that browsers have the corresponding plugin enabled, and nothing but the site itself requires a browser to have it. Some sites deliver a non-Flash version; some do not. As Silverlight, Flash, and possibly other graphics packages compete for market share, web sites cannot expect all browsers to have these plugins present. Thus they must accept that only a subset of web users will use their site if they employ such delivery media. This is the status quo and is well understood.
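The format- and source-based matching described above can be sketched in a few lines. This is a toy illustration, not any real blocker's implementation; the filter patterns are invented examples in the spirit of Adblock's rule lists:

```python
import re

# Invented filter patterns: requests are blocked by where they come from and
# what they look like, not by the semantic meaning of the content they carry.
FILTER_PATTERNS = [
    r"://ads?\.",             # served from an ad.* or ads.* hostname
    r"/banners?/",            # paths that clearly serve banner creatives
    r"_468x60\.(gif|jpg)$",   # a classic banner size baked into the filename
]

def is_blocked(url: str) -> bool:
    """Return True if the request URL matches any filter pattern."""
    return any(re.search(p, url) for p in FILTER_PATTERNS)
```

A site that rotates its ad filenames defeats the last pattern but not the first two, which is why blockers that share source-based lists between clients converge on pre-impression blocking.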

Is it unethical to block (some) content from web sites?

  • Every real-world metaphor applied to Mr. Carlton's opinion fails to prove his point. Customers who discard newspaper inserts without reading them are not stealing. Passengers who close their eyes to roadside billboards (revenue from which can pay for maintenance of the road they are using) are not stealing. Removing bundled software from a packaged software install is common and in most cases prudent.
  • Web sites that rely on advertising as their primary income, without selling a product themselves, are the hardest hit by ad-blocking software. However, the information they provide is essentially free and is replicated once online (the original content may not be replicated exactly, but the information is paraphrased, summarized and editorialized). Thus, one cannot expect such information to be a constant source of ad income as it becomes "stale". Only through constant updates, living up to the "content is king" maxim, will a site continue to draw viewers. Such a provider must choose between a subscription model and an ad-supported model, and each has different advantages. Neither can be mandated as "policy" for online server behavior.
  • Mr. Carlton claims that the "monetization of the internet" is the fuel behind its growth. Historically, this is patently untrue; for a large swath of the WWW's lifetime, there was a struggle to provide commercial services at all. Advertising is not the primary method of information dissemination on the medium, since text-based searching remains king for information retrieval. The bundling of ads with content can be a source of income via Google's AdSense service, but this is not necessary for all web sites to exist and profit.

  Overall, Mr. Carlton's comments express an opinion that suffers from a short-term view of the technology of the internet. Since its inception decades ago, the continuing design decisions have centered on the freedom of the client side. Information can be gathered and collected from almost any source server in an automated fashion. Advertising attempts to tie the cost of internet participation to the transfer of commercial information, but that link has never been binding, and today, as savvy users remove unwanted information from web sites, the core design features surface again: the client can do anything it wants with the information your server provides. Indeed, this freedom is how innovation and creativity caused the web to explode in popularity in the first place; the web grew from having an immense data source of users, references, content, and techniques available for manual or scripted use.

  The rise of ad blocking can be attributed to the disconnect between a web site's content and the relevance and design of its ads. Most unintelligent ad servers do not screen their delivered content for offensive or annoying presentations. Web sites have traditionally had little say in which ads could be rotated through their system, save for excluding the largest categories of pornography, violence, etc. The top provider in the market at the moment, Google's AdSense, allows for the strongest intelligent-delivery and ad-filtering concepts. It also works to remain relevant not just to the site but to the content itself, as well as to any behavior of the client that can be tracked. This results in the least annoying, and consequently least blocked, ads. One could infer from this relationship that if web sites took more personal responsibility for the ads they delivered, users would accept or reject the site in total, rather than blocking or partitioning the content.

  This author expects that, like all things online, this meme of Danny Carlton's will not die fully but will live on in limbo in perpetuity. In practice it will not change much about how the web functions. If a large percentage of online users implement file/script blocking, we may see a movement among advertisers to provide full-page ads that require an impression before moving on to ad-free pages. Salon, for example, has experimented with this format with some success. It cookies the impression, so one can use the site without (full-page) ads afterwards, and it also provides a second-tier subscription model for more content. For Salon this is a good balance, since the site seems to take responsibility for its advertising, much like the models for TV broadcasters.
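The cookied-impression scheme just described is simple to state in code. A minimal sketch, with the function and cookie names invented for illustration:

```python
# Cookied-impression model: show one full-page ad, record it in a cookie,
# then serve the content ad-free on subsequent requests.
def handle_request(path: str, cookies: dict) -> str:
    if cookies.get("ad_seen") == "1":
        return "content:" + path                 # impression already recorded
    cookies["ad_seen"] = "1"                     # remember this visitor saw the ad
    return "interstitial-then-content:" + path   # full-page ad first, then content
```

The first request for any page pays the one-time ad impression; every later request in the same cookie jar goes straight to content, which is the balance the full-page model aims for.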

User Journal

Journal: Game Idea

Journal by mugnyte

FPS is slick but tired. It's becoming a commoditized format that is sorely in need of not just new genres but new gameplay mechanics. In other words, the actions of "Move To Place"/"Act On Object"/"Fight With Being" are Ye Olde Hat. Even combined into a neat bunch of puzzles to unlock a new location/object/fight, they are flavors in an ever-diminishing cup.

  The market for games is now large enough that targeting a subsection of it could be profitable. If the commoditization of capable engines continues, a focus on design content, storyline, and the like may soon follow. I already know of plenty of texture-manipulation packages that line up a "to do" list of surfaces and transitions so that you simply fill in the spots (albeit a huge lineup of nice images) to re-skin a game.

  But what I want is to start injecting programmability into FPS environments. I want to run a Core Wars + Quake Arena game, with bots linking to one another, communicating, learning. I want algorithms to be shaped and reshaped by hours of tourneys, eventually reaching the limits of the platform.

  Game AI is finely tuned down, since bots could otherwise have perfect reflexes, perfect aim, etc. But let's make maps that demand more advanced pathing algorithms; let's have bots attack the puzzle collections. Let's present the bot language to the gamer, and have them simply watch, or perhaps coach, the bots during gameplay.

  I'd enjoy a war sim much more if the "eye in the sky" coaching meant tweaking bots in real time, dealing with variables such as "fear", "erratic behavior" or "hubris" in a bot. Early on, we're doomed to a limited platform for such things, but with some amount of movement from that huge market, we ought to see games commoditize AI control platforms much as physics engines are expected to be commoditized in the next 5 years.
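The coaching idea above can be sketched as a bot whose behavior is driven by tunable traits. This is a toy illustration; all names and thresholds are invented:

```python
# A bot whose decisions depend on coach-adjustable traits ("fear", "hubris").
class Bot:
    def __init__(self, fear=0.5, hubris=0.5):
        self.fear = fear        # high fear: retreat sooner when hurt
        self.hubris = hubris    # high hubris: attack even strong enemies

    def choose_action(self, health: float, enemy_strength: float) -> str:
        if health < self.fear:            # frightened bots fall back first
            return "retreat"
        if enemy_strength < self.hubris:  # overconfident bots press the attack
            return "attack"
        return "hold"

bot = Bot(fear=0.2, hubris=0.9)
# The "eye in the sky" coach retunes traits mid-game:
bot.fear = 0.8   # the same bot now plays far more cautiously
```

The player never aims or moves for the bot; the gameplay is in watching how a trait change ripples through hours of tourneys.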


User Journal

Journal: Sensoral Templates for the Web

Journal by mugnyte

Ask a web-based bot to define something, and you're almost guaranteed to get an English paragraph, written in part or wholly by humans, sitting on a site or in a cache found by a spider.

  Now, this itself isn't bad, and it's gotten us almost all the "reference" material one could ask for on certain topics. Images and videos are now emerging as the next targets of human search.

  Now, what if there were a series of extended descriptive templates anyone could fill out? These templates would share a general format (like Unicode) that could describe something for a particular sensory function. For example, if you wanted to learn the shape of an apple, a normalized, general-format 3D mesh would appear, along with links to the fruit's development cycle, varieties, diseases, etc. For the most part, a "representative sample" for an open-ended search such as "apple" would return a red 3D mesh, complete with descriptive qualities of the makeup of this mesh.

But then again, a mesh is really just a surface formula used to approximate a real thing. Perhaps a full-blown language is necessary to codify the whole apple, its subsections, and so on. When this language was read through a "visual" template, it would return a picture of the apple while enabling 3D slicing and dicing of the object. When run through a "tactile" template, it would return something that could describe the touch of an apple. Smell, taste, etc. Our computer-human interfaces are really only auditory and visual, so for now perhaps only that information gets focus.

Obviously, this language would be most easily adopted if it:

  • Used existing web technology to encode the proposed language
  • Was approachable through the use of tools and/or techniques that allowed anyone to contribute to the body of information
  • Was globally accessible and yet politically agnostic - not encoding into any regional format and not locked away from any set of users.


  - Extend HTML or a variant (perhaps through some of the open-ended tags/attributes for custom data) to encode the visual and auditory holdings of a concept or thing.
  - Provide a series of tools to encode information into this format. It should take in visual and auditory information to a greater degree than a simple picture.
  - Provide a tool to consume this information, enabling drill-into capabilities for the visual information, markups, etc.

This, of course, is sort of a Wiki with extended information. In fact, extending Wikipedia in this way would be a great undertaking. "Person" references could bring up a video of that person; this doesn't need much in the way of new technology. However, if "Object" references were more fully fleshed out to include this mesh concept, with a small Java window to spin/slice/zoom the object, I think more folks would be drawn to it.
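One possible shape for such a record is sketched below. The field names are invented for illustration; each sense gets its own slot, and with today's interfaces only the visual and auditory slots would usually be filled in:

```python
from dataclasses import dataclass, field

# A hypothetical "sensory template" record for one concept.
@dataclass
class SensoryTemplate:
    concept: str
    visual: dict = field(default_factory=dict)    # e.g. a mesh reference
    auditory: dict = field(default_factory=dict)  # e.g. encoded sound
    tactile: dict = field(default_factory=dict)   # texture, firmness, ...

# The "apple" example from above, pointing at a POV-Ray-style mesh source:
apple = SensoryTemplate(
    concept="apple",
    visual={"mesh": "apple.pov", "color": "red"},
)
```

An encoding like this could ride on existing web technology (serialized into HTML's open-ended custom attributes, say), which is the adoption path the bullet list above asks for.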

Candidate Technologies:
  - Generalized 3D meshes, or material composition languages. (POV-Ray's C-like syntax is a crude example)
  - Encoded sound in a compressed format, but transferable in both text-based and binary formats.
  - Certain medical imagery interfaces, whereby 3D images can be explored in a CAT-scan like manner.

User Journal

Journal: Collaborative Systems

Journal by mugnyte

The ever-growing uses for information connectivity:

  • HTML, Hyperlinks
  • Blogging, Commenting
  • Peer-to-Peer Networking/File Share tools, personal-hosting
  • Collaborative knowledgebase-ware (Wikis)
  • Social Networking Sites

If we can begin to fill out more complex templates for the world described in all this web content, then perhaps a semantic web may be born of it. At the very least, better AI, or at least better NLP, could result.

More to come...

User Journal

Journal: Information Age

Journal by mugnyte

An ongoing research project:

The "Information Age" era so far spans the 100 years from 1950 to (perhaps) 2050. It subsists not on the year-by-year declarations that "information wants to be free" or "digital is open" and so on, but merely on the combined long-term effects of:

  • Information and the machines to store/access it are ubiquitous and relatively cheap.
  • The more open flow of information creates a multitude of societal changes, some overlapping and some trumping each other.
  • Society thrashes and skitters on new concepts until they slowly settle, limited only by scientific physical barriers. Attempts to control these changes politically or with laws consume huge amounts of energy, but are eventually defeated.
  • The pioneers of the era are defined by pushing information's movement and creation. Cataloging, summarizing, capturing, copying and distributing information causes a flood of public experiments to fork and mature. Best of breed winners, fragmentation and eventually encapsulation make the interim market.
  • Societal change happens, and yet concepts that divide groups migrate to new mediums, continuing much of the strife between these groups across the globe. Group-based feedback systems repeatedly report that the era's new platforms do little to change minds, and much to merely reflect them.
  • The era only ends when the feedback loops on this information are on a slowing curve. At the moment, these loops are only expanding, where new systems to re-process the growing body of information create ever more metadata. Decision-based systems using this metadata are the most useful levels of the era - and only when slowing will the era be in decline.
  • As of 2006, we are in the prime of this era.

User Journal

Journal: F145t P05t

Journal by mugnyte

Ok, so it's been about a year of /.'ing and I've come to rely on it as my constant work distraction. Be it reading, commenting, or moderating, I'm addicted.

I'm constantly enjoying how vast the world of the computer age really is. The niches are so numerous and so deep that I get a great sense of satisfaction knowing I always have a lot to learn.

Even as I try to move among different technologies, I like the "worldly" sense one gets from reading /. I can truly say I've gotten a bit of an education perusing the highly modded comments on a subject.
