
Comment: Re:When did validation actually help anyone? (Score 1) 120

Were you doing websites 10 or 15 years ago? I was. Browser compatibility today is phenomenal in comparison.

Yes, I was, and I respectfully disagree. Browsers today do a lot more, but support for newer features is frequently so specific to each browser, and in some cases so unstable, that it is completely useless for real-world projects; or it requires silly amounts of boilerplate and prefixing (which will break at some future point you can't predict, so it's also useless for production sites that won't have ongoing maintenance); or at best it requires implementing the same thing in multiple independent ways.
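
Just to illustrate the kind of boilerplate I mean, here's the sort of dance that was needed merely to get a frame callback working across browsers; a minimal sketch using the vendor-prefixed variants of requestAnimationFrame that various browsers shipped:

```javascript
// Fall back through the vendor-prefixed variants of requestAnimationFrame
// that different browsers shipped before settling on the standard name.
var raf = window.requestAnimationFrame ||
          window.webkitRequestAnimationFrame ||
          window.mozRequestAnimationFrame ||
          window.msRequestAnimationFrame ||
          function (callback) {
              // Last resort: approximate 60fps with a plain timer.
              return window.setTimeout(function () {
                  callback(Date.now());
              }, 1000 / 60);
          };

raf(function (timestamp) {
    // ... animation work goes here ...
});
```

Multiply that by every new feature you want to use and you can see where the developer time goes.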

An example of useful standardisation would have been all browsers using the same default stylesheet. Imagine how much developer time could have been saved and how many glitches could have been avoided over the years if we had never needed things like CSS resets or Normalize.
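
You can see the lack of a shared default stylesheet for yourself; a quick sketch you can paste into different browsers' consoles and compare (the element list here is arbitrary):

```javascript
// Dump a few of the defaults each browser's built-in (user-agent)
// stylesheet applies. CSS resets exist purely to cancel out the
// differences between browsers in output like this.
['h1', 'ul', 'blockquote', 'fieldset'].forEach(function (tag) {
    var el = document.createElement(tag);
    document.body.appendChild(el);
    var style = window.getComputedStyle(el);
    console.log(tag,
                'margin-top:', style.marginTop,
                'padding-left:', style.paddingLeft);
    document.body.removeChild(el);
});
```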

If it breaks my JS or CSS, I won't use it unless the stakeholder absolutely insists.

But the point is that these non-standard-compliant implementation techniques don't break anything in practice, because every browser is tolerant of them and will always remain so because far too much would break otherwise. The only downside to not following those standards is that someone can complain you're not following their preferred standards. And someone always will, but unless it really does matter (for example, because it excludes customers and damages your bottom line, or it actually does undermine some sort of accessibility aid) you can just ignore them.
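
That tolerance is easy to demonstrate: every browser's HTML parser quietly repairs even genuinely invalid mark-up rather than rejecting it, and since HTML5 specified the error recovery, they all repair it the same way. A quick sketch using DOMParser:

```javascript
// Misnested tags are invalid HTML, but the parser rebuilds them into
// a valid tree instead of failing.
var doc = new DOMParser().parseFromString(
    '<b><i>bold italic</b> just italic</i>', 'text/html');
console.log(doc.body.innerHTML);
// "<b><i>bold italic</i></b><i> just italic</i>"
```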

Comment: When did validation actually help anyone? (Score 1) 120

In my opinion governments should require that their sites pass the HTML validator and CSS validator tests.

Genuine questions: Who do you think that would help, and why?

This kind of validation can be useful if you need to follow a standard for something to work. If browsers all followed proper de jure standards then this would offer a useful benefit for compatibility, particularly forward compatibility with future browsers.

Unfortunately, most of the major browsers today do not do this at all consistently. Even some of the people writing the standards have basically given up. (HTML5 "living standard"? Seriously? If it changes arbitrarily then it's not a standard.)

The de facto standards that actually matter are how real browsers behave, which dictates whether your page looks right in the browsers your visitors are using today. Nothing else you do today is guaranteed to work tomorrow without regular attention anyway, which is a foolish regression from the situation of a few years ago, for which we can thank Google and Mozilla, but it's the reality all the same.

In my entire career doing Web work -- which is measured in decades -- I'm not sure I have ever seen an example where a project was objectively better off because it routinely enforced having valid mark-up and stylesheets. I have, however, seen plenty of cases where someone has deliberately deviated from W3C standards for a specific, useful reason.

For example, Google have been known to omit mark-up that they were sure wasn't necessary in any browser in order to save a few bytes. Multiply those bytes by a bazillion visitors to their site every day and that's a lot of traffic saved overall. Another common case is trendy MVC frameworks like Angular, which often use non-standard attributes on HTML elements for their own purposes. They could use standard "data-*" attributes, but once you've got a few of those sitting on many elements in your mark-up, it's just noise and excess weight, so they use their own prefix for namespacing instead. And yet, I don't see anyone claiming that either Google's search engine or Angular as a JS framework have failed as a result of these heinous crimes...
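
For comparison, here's what the two approaches look like from script; a minimal sketch where the element and attribute values are just for illustration:

```javascript
// Given mark-up like:
//   <div id="widget" data-user-id="42" ng-repeat="item in items"></div>

var el = document.getElementById('widget');

// Standard data-* attributes get first-class API support via dataset...
console.log(el.dataset.userId);            // "42"

// ...while non-standard attributes like Angular's work just as well in
// practice; they simply have to be read the generic way.
console.log(el.getAttribute('ng-repeat')); // "item in items"
```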

Comment: Re:There is no need to prove "further" damage (Score 1) 56

by Anonymous Brave Guy (#49358513) Attached to: Google Loses Ruling In Safari Tracking Case

However, we don't normally award punitive damages in civil cases here in the UK, so even if there is a definitive judgement at some stage that Google was invading privacy and failing to protect personal data, it seems unlikely they will suffer more than a token slap on the wrist from a privacy regulator, provided that they cease and desist (as it appears they already have). Unfortunately, civil trials here are not very effective at recognising damage that comes in forms other than direct financial loss, nor at doing much to compensate for it or to discourage similar behaviour in the future.

Comment: Local rates = OK, everything else with them = bad (Score 1) 137

Hopefully though, the rise of MOSS-compliant payment processors should make the system easier to follow - you just put up a disclaimer that the final price will be based on the buyer's VAT rate, and let the payment processor calculate the right rate and store the records.

Which is, of course, contrary to consumer protection laws in much of Europe. Merchants are often required by law to show tax-inclusive prices for B2C sales. (For anyone interested: I have now received conflicting advice on this from official sources in my own government, indicating that X+VAT pricing is now magically acceptable for this purpose again, despite it largely defeating the point of the previous consumer protection rule by hiding the bottom-line price in early advertising.)

The big problem with the new VAT rules isn't the principle of charging in each customer's home nation, if that just means looking up the rate for a given country from a database instead of using a fixed rate. It's a mild inconvenience, but it's an hour or two of programming work for someone, and with MOSS it's maybe an extra hour to file an additional tax return once per quarter.
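
The lookup itself really is that small; a minimal sketch, where the rates table is purely illustrative and not something to trust for real invoicing:

```javascript
// Illustrative standard VAT rates only -- NOT authoritative data.
var vatRates = {
    GB: 0.20,
    DE: 0.19,
    FR: 0.20,
    LU: 0.17  // Luxembourg's new standard rate from the changeover
};

function grossPrice(netPrice, countryCode) {
    var rate = vatRates[countryCode];
    if (rate === undefined) {
        throw new Error('No VAT rate on file for ' + countryCode);
    }
    return netPrice * (1 + rate);
}

console.log(grossPrice(10, 'DE')); // ~11.90 (use proper decimal
                                   // arithmetic for real money)
```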

For a lot of merchants (though certainly not all, and particularly not the really tiny ones), the problem isn't even the need to impose VAT on transactions instead of having a threshold. As I understand it, some businesses selling digital goods in EU states didn't have VAT thresholds before anyway, so they already had reporting requirements. And in places like the UK, which did have a minimum threshold before registration was compulsory, some merchants would have chosen to register for VAT voluntarily anyway, because it was advantageous for reclaiming VAT on their expenses.

IMHO the largest and most enduring problems with the new VAT rules are actually all the other things that came along with charging at customer-local rates. There are conflicts with pre-existing laws on things like consumer protection and data protection (or potential conflicts, with inconsistent advice coming even from government departments). You also have to match the entire VAT regime in each country, not just the rate, which means knowing which rates apply to which products or services, plus the local geographical quirks (I hope you're not just looking up a tax rate by ISO country code like, you know, everyone, because that doesn't actually work reliably). And of course the rules demand a standard of evidence for the customer's location that will be literally impossible for many small merchants to meet. At present, I don't see how any fully automated system can be 100% reliable here, even for big payment services with dedicated resources and access to all the relevant raw data, because of those local differences in which product/service types get which tax rates and the local geographical anomalies.
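
To make the geographical point concrete, a bare country code isn't enough, because various territories sit outside their country's VAT regime or inside another country's. A sketch, where the helper functions are hypothetical and the exception list is nowhere near complete:

```javascript
// Map a customer's location to the VAT regime that actually applies.
// isCanaryIslands/isChannelIslands are hypothetical helpers; a real
// implementation needs postcode/region data and a much longer list.
function vatTerritory(countryCode, region) {
    if (countryCode === 'ES' && isCanaryIslands(region)) {
        return null;  // Canary Islands: outside the EU VAT area
    }
    if (countryCode === 'GB' && isChannelIslands(region)) {
        return null;  // Channel Islands: outside the EU VAT area
    }
    if (countryCode === 'MC') {
        return 'FR';  // Monaco is treated as France for VAT purposes
    }
    return countryCode;
}
```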

The best part of all is that even the EU didn't manage to publish an accurate source of current VAT rates across all affected states in time for the deadline. The information on their own web site was actually wrong for several weeks after the switchover, because Luxembourg changed their VAT rate on the same day. And no-one wanted the data in an actually useful form so you could do something stupid like importing it into a database, right? PDFs running to dozens of pages that you can scan for relevant information are so much more useful.

Hilariously, Luxembourg are actually being compensated by the EU for these changes anyway, so all the arguments about preventing exploitation of low tax rates by different nations within the EU don't look so noble any more either.

Comment: Re:Cruise control? (Score 2) 282

Somebody who can't pay attention to the street signs shouldn't be driving.

No, they shouldn't, but some of them are going to anyway. Since your loved ones will therefore be just as injured/dead if they are the unlucky ones who get hit by a bad driver who was going too fast, dismissing technology that might help those bad drivers to be better, safer drivers seems uncalled for.

Comment: Re:Are the CAs that do this revoked? (Score 1) 133

by Anonymous Brave Guy (#49330839) Attached to: Chinese CA Issues Certificates To Impersonate Google

Yes, it's a Too Big to Fail problem, just in another form.

If anything is too big to fail, you are usually better off making it fail anyway as soon as possible to minimise the damage. Some of the problems in the global financial industry today aren't because of inherent weaknesses in the system. Instead they have been caused precisely by allowing organisations to grow too big, or perhaps more accurately by allowing them to take on disproportionate levels of risk, and then supporting those organisations at government level instead of allowing them to go under when they should have.

If your browser throws errors on just about every site you visit, pretty soon "many" people will start using another browser.

But it won't, because plenty of other CAs are used and plenty of sites don't use HTTPS routinely yet. All the big sites, the Facebooks and Googles and Amazons of the world, would have switched to another CA within an hour. All the truly security-sensitive organisations like your bank or card company or government would update their certificates very quickly as well.

CAs determined to protect their reputation at a time when their industry would inevitably be seriously damaged in the credibility stakes might take longer to issue things like EV certificates as they made a point of fully validating the organisations requesting them. However, basic HTTPS access and the highly recognisable padlock symbol would be back on all the big sites almost immediately. The worst they would likely suffer would be a few minutes of downtime (assuming organisations on that scale don't routinely have back-up certificates with a completely independent chain on permanent stand-by anyway) and maybe a slight increase in customer support calls as genuinely security-conscious users noticed the lack of EV identity for a while.

Meanwhile, any browser that didn't remove a known-compromised CA from its trusted list very quickly would be vulnerable to justified criticism and no doubt plenty of rhetoric built on top about being insecure, and how users mustn't use that browser to visit safe sites like their bank or someone will empty their account. The geeks would get hold of the story first, of course, but as soon as it made front-page news (and something on this scale probably would) everyone would be talking about it that day.

Comment: Re:The Web of trust only works (Score 4, Insightful) 133

by Anonymous Brave Guy (#49330587) Attached to: Chinese CA Issues Certificates To Impersonate Google

Trusting many different CAs has proven to be a bad idea

Trusting any one of many different CAs has obvious vulnerabilities, as this case demonstrates (and it's not exactly the first time the problem of an untrustworthy CA has been observed in the wild). The current CA system isn't really a web of trust, because it ultimately depends on multiple potential single points of failure.

One way or another, in the absence of out-of-band delivery of appropriate credentials, you have to trust someone, so I suspect the pragmatic approach is to move to a true web-of-trust system, where you trust a combination of sources collectively but never trust any single source alone, and where mistrust can also be propagated through the system. Then at least you can still ship devices/operating systems/browsers seeded with a reasonable set of initial sources you trust, but any single bad actor can quickly be removed from the trust web by consensus later, while no single bad actor can undermine the credibility of the web as a whole. Such a system could still allow you to independently verify the identity of a system you're talking to via out-of-band details if required.
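
As a rough sketch of the kind of collective decision I mean (the threshold and data shapes are entirely made up for illustration):

```javascript
// Toy model of a trust web: a certificate is accepted only if enough
// independent trusted parties vouch for it AND none of them has
// flagged it. The threshold of 3 is arbitrary for illustration.
function isTrusted(cert, endorsements, revocations, trustAnchors) {
    function fromAnchor(record) {
        return trustAnchors.indexOf(record.signer) !== -1 &&
               record.certId === cert.id;
    }
    var vouches = endorsements.filter(fromAnchor).length;
    var flagged = revocations.filter(fromAnchor).length > 0;

    // Mistrust propagates: a single flag from a trusted party blocks
    // the cert, while no single endorsement is sufficient on its own.
    return !flagged && vouches >= 3;
}
```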

Comment: Re:Whatever ... (Score 1) 141

It seems like we probably agree on the general idea here, but I was impressed on a recent visit to a museum where they had mobile apps you could download in advance and WiFi available on-site. Together these let you choose from a number of recommended tours based on duration and topic(s), and then guided you around with directions, highlights, and more in-depth background on various other exhibits you'd pass along the way if you were interested. It was a well-made presentation that someone had obviously worked hard to put together, and the only thing that was a little awkward was walking around holding a tablet with headphones plugged in for the whole visit. That's an area where I could see an unobtrusive headset being an advantage.

Comment: Re:Whatever ... (Score 2) 141

People were hostile to people with cell phones in the 1980s

And today there are quiet carriages on trains, coffee shops with no-phones policies, and generally if you're the guy who talks really loud on the phone then everyone around you still gets annoyed and may actually challenge you if you carry on for long.

And that's for a device that is merely an interruption, not one that a lot of people perceive as an inherently creepy invasion of their privacy simply because the wearer glanced in their direction.

In general Google Glass may or may not make it.

I expect technology similar to Google Glass will make it, but I also suspect it will be used primarily for specific applications where it has a clear benefit. I don't think anything too similar will be worn by a lot of people all the time in the near future.

For example, someone walking around a museum might borrow some sort of headset that guides them on a tour and provides background information about each exhibit they are looking at. Staff at a warehouse used for on-line grocery shopping might have a headset that guides them to collect the items purchased in the most efficient way.

However, I think the tide is perhaps already starting to turn against mass surveillance culture, intrusive personalised advertising, and the like. Surely these things will only attract more hostility as charges people see directly in their bank balance, like insurance premiums, become ever more customised behind the scenes, and as more people either suffer significant problems themselves due to identity theft or embarrassing disclosures, or know close friends or family members who have.

In fact, I wonder whether even the US government, not exactly a bastion of privacy advocacy, might be having second thoughts about how much personal data is casually thrown around, now that hostile forces are openly doxxing US service personnel and encouraging allies within the US to attack those people and their families at home, as was reported this week.

So if I were going to place a long-term bet on new technologies tomorrow, I certainly wouldn't be backing an obviously intrusive device like the previous Google Glass, complete with tiny camera, always-on microphone, and wireless connection to the mothership. On the other hand, build a device with similar useful features but a less goofy design, and then back it with a widely-advertised and genuine emphasis on privacy so it didn't engender the same degree of hostility from others nearby, and you might be on to something.

Comment: Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

Thanks. I'm really surprised by that. We've used onbeforeunload for ages to give genuine warnings about unsaved changes in web apps, but for security reasons browsers have long forced any prompt message into a standard format and only given pages one shot at blocking the user from leaving. It hadn't even occurred to me that they would still let you spend significant amounts of time in that function or do other user-hostile kinds of things.
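
For reference, the sanctioned pattern looks roughly like this (formHasUnsavedChanges is a hypothetical stand-in for whatever dirty-state check an app uses):

```javascript
// The standard mechanism for warning about unsaved changes. Modern
// browsers ignore the custom text and show their own generic prompt,
// and the page gets exactly one shot at it.
window.onbeforeunload = function (e) {
    if (!formHasUnsavedChanges()) {
        return;  // nothing unsaved; let the page close normally
    }
    var message = 'You have unsaved changes.';
    (e || window.event).returnValue = message;  // some browsers
    return message;                             // the rest
};
```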

Comment: Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

Go ahead, watch the bottom of the page when you close a Slashdot tab. When Slashdot is slow (and it often is) you'll see the "Working" indicator with the shitty little spinny wheel before your browser actually complies and closes the tab.

What is that, anyway? I have the usual blockers and such installed, but something there is still getting through.

And how the hell does it stop browsers from closing the window immediately when I tell them to? There is a defined mechanism for prompting users to confirm they want to leave a page, which Slashdot doesn't use. Why is anything else not just killed instantly when you close the tab, and who gave web pages a vote in whether or not I get to close them in my own browser?!

Comment: Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

If you're getting regular crashes in everything but IE, there's something wrong.

I'm a professional web developer, I work on a wide range of projects, and in recent versions I've seen fatal errors with Firefox and Chrome several times per week on average. Chrome comes up with a "didn't shut down properly" message a lot of the time after a session of nothing more than loading it, leaving a real-time view in Google Analytics open for a while, and then closing it down again!

It may be that we are using different definitions here. I'm including things like Firefox hanging (requiring the process tree to be killed) because of problems with add-ons rather than Firefox itself, Chrome shutting down a process but then complaining next time it starts up, or being forced to close and restart Chrome because it doesn't reset things reliably any more if you refresh a page using Java applets.

It's also fair to say that the projects I work on don't tend to be done-in-a-day tweak-a-template jobs. We're using relatively complicated page layouts in web apps, various HTML5 features in quite demanding ways, and handling non-trivial volumes of data in JS. None of this should be a problem with modern web technologies, but they are things a simple page for your local church or a basic e-commerce kind of site probably isn't doing.

Just to be clear, the general level of unreliability I'm talking about has been observed under controlled conditions, over an extended period, across multiple projects, testing by multiple people using multiple computers and operating systems. This is not one isolated case. Unfortunately, the behaviour often differs depending on the particular system used for testing, or might even not be reproducible from one test to the next, so it would often be difficult to file a useful bug report with a good test case, even if we had enough time to do so.

Comment: Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

Yes, it seems we do generally agree about this.

I have criticised the rapid release cycles of Chrome and now Firefox many times myself for the instability and the crazy number of regressions they seem to bring with them (though expressing this opinion seems almost guaranteed to get you down-modded/voted on almost any web development forum on-line). I'm disappointed that Microsoft is reportedly going to move in the same direction with its new browser project.

Comment: Re:Plug-ins were scapegoats but now we can't go ba (Score 1) 237

But performance of data parsing still largely sucks in all managed languages.

Are we talking about parsing the JS, or work being done by JS code here? I'm certainly not suggesting rewriting the browsers themselves or major components like the JS engine in a managed language. There are plenty of ways to make much more security-friendly languages than C++ that still compile to self-contained, native code without depending on a heavyweight VM.

This is a highly specific task, really. And browsers have already literally excluded themselves from the rest of the software ecosystem. They come with their own network libraries, DNS libraries, security libraries, video/audio decoding libraries, GUI libraries and so on.

I don't think it's as specific as you're suggesting. The same general balance between needing the control and speed vs. needing security and robust code applies to just about any system software or communications software, for a start.

Ironically, those dependencies on their own libraries (or reinventing all the wheels on the carriage, if you prefer) that were set up to promote portability mostly seem to have adverse consequences that would have been avoided if they'd actually used their host operating system instead of trying to be one. For example, Chrome infamously rendered text worse than the native system functions on all major platforms for a long time, while trying to actually build a site that uses HTML5 multimedia elements has been absurdly difficult for developers because there is so much variation in which exact A/V formats the different browsers support. (Did I mention that Flash could just download an audio or video file in one format and play it on all the platforms it supported?)
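
The format-detection dance I mean looks something like this; a sketch using the real canPlayType API, with the file names being hypothetical:

```javascript
// Pick an audio format the current browser can actually decode.
// canPlayType returns "", "maybe" or "probably" rather than a boolean.
function pickSource(audio) {
    if (audio.canPlayType('audio/mpeg')) {
        return 'clip.mp3';
    }
    if (audio.canPlayType('audio/ogg; codecs="vorbis"')) {
        return 'clip.ogg';
    }
    return null;  // no supported format; fall back to a plug-in, ironically
}

var player = document.createElement('audio');
var source = pickSource(player);
if (source) {
    player.src = source;
}
```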

Nope. Your memory is playing tricks with you.

Browser crashes due to plug-in bugs were the most frequent cause of browser crashes.

That may be true, but according to the objective data on the projects I work on (which include some going back nearly a decade and using plug-ins) today's browsers are significantly worse for crash bugs than they used to be.

Chrome comes up with some sort of "I didn't shut down properly last time" warning almost daily, often prompted by nothing but loading Google's own sites. It can't even reinitialise pages using Java applets properly after a page refresh any more.

Firefox has been hanging more for us in recent months than it has for years. This appears to be due to a couple of popular add-ons we use rather than Firefox itself, but the fact that a failing add-on can take out the entire Firefox process is itself a damning indictment of Firefox's basic process isolation and security model, which is still fundamentally flawed many years after every other major browser dealt with this issue.

I recently went travelling, and with mobile devices just a few years old, the built-in browser was crashing just from trying to access various private WiFi systems. Sure, the browser is a little out of date, but that's because to upgrade it we'd wind up upgrading the whole OS, which numerous sources report as basically rendering the device so slow and buggy as to be useless.

The only major browser that does not have major crash/hang bugs with any project I work on today is actually IE, which gets a bad rap for historical reasons but objectively has been vastly better in quality than Firefox, Chrome or Safari for several years now according to our bug trackers.

Here is a simple technical reason: keyboard input. There is no established interface, and generally the interface is highly OS specific, for a plug-in to pass an unhandled widget event (for example keyboard input) to the browser.

That's a fair example, though my immediate question is why these plug-ins ever had direct access to things like keyboard input in the first place, given the obvious stability and security issues you mention. We've been running Java applets embedded within web pages for around two decades, and it's kind of absurd that in all that time and despite the rise and fall of other plug-ins like Flash and Silverlight along the way, browsers and operating systems haven't come up with a better model.
