
Comment Can this be used for graphene semiconductors? (Score 1, Interesting) 42

The article mentions that one can change the bandgap of a material with the laser. Isn't this what has been holding back graphene semiconductors--that they have a zero bandgap? Could this technique be used to produce practical graphene semiconductors?

Comment Re:Datacaps anyone? (Score 1) 140

Yes. Just artificially dropping some packets (whether deliberately or just to implement some notion of quality-of-service) can be problematic. While an established TCP socket can deal with this, a DNS lookup [which is datagram based] can be severely affected. My ISP implements QoS, and most of the delay I experience is a failed DNS query that must time out and be retried (e.g. I'll wait a minute for a page load, but 55 seconds of that is waiting for the DNS request to succeed).
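
To make the failure mode concrete, here's a minimal sketch in Python [illustrative only; the resolver address and timeout values are assumptions]. A DNS query is a single UDP datagram, so if it (or the reply) gets dropped, the client sits out the full timeout before retrying:

```python
# Minimal DNS-over-UDP probe: one dropped datagram costs a full timeout.
import socket, struct, time

def dns_query(name, server="8.8.8.8", timeout=5.0, attempts=3):
    # Build a minimal DNS question for an A record (header + QNAME + QTYPE/QCLASS).
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    packet = header + qname + struct.pack(">HH", 1, 1)
    for attempt in range(attempts):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        start = time.time()
        try:
            s.sendto(packet, (server, 53))
            return s.recv(512)   # raw DNS reply; parsing omitted
        except socket.timeout:
            # The drop isn't detected--we just wait out the timer and retry.
            print(f"attempt {attempt}: waited {time.time() - start:.1f}s for nothing")
        finally:
            s.close()
    return None
```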

It's a way to censor things without doing outright censorship (e.g. blocking Google 100%). It's my belief that when Google was negotiating with the Chinese government about access, they argued strenuously, but in the end, they took the best deal they could get [were offered]. I mean, if Google had taken a stronger stance (e.g. "we won't limit access"), what would the government's response have been (e.g. 100% blockage) and what would the Chinese people's response have been?

Rest assured that your government's artificial push for Baidu was known here in the US, not that we've been able to do much to help. Our government people do talk with Chinese officials about such things [and many more], but your government is usually quite stoic in its responses ;-).

Economic freedom and political freedom are two sides of the same coin. You can't really have one without the other. Note that I'm not talking about capitalism per se. Many European countries have a modified form of socialism, but still have a government that fosters political freedom.

Hopefully, Google's initiative will provide some improvement/relief. Time will tell.

Comment Datacaps anyone? (Score 1) 140

Seems to me the limiting factor will be ISP datacaps.

The ISPs that tend to have them are the ones that also want to send content (U-Verse and Comcast, to name a few). Datacaps limit peer-to-peer networks.

A more sinister interpretation is that datacaps limit the amount of traffic that the NSA has to sift through. The ISPs that seem to have the greatest track record of caving to NSLs, etc. are also the ones with datacaps. Coincidence?

Thus, datacaps also apply when one's "friend" routes traffic through one's connection to support a distributed VPN scheme.

Comment Re:Will this stupidity ever end? (Score 1) 228

One of the comments in the original article ["Julian"] claims to have the router and to have verified it.

It's not hard to verify. Create a Perl/Python/whatever program [on a PC] that mimics the "User-Agent:" string and tries to do something that would otherwise be password challenged. If it succeeds, the access method exists [and is exploitable--from the local LAN if nowhere else].
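
Roughly, in Python [a sketch, not a definitive test--the router address is an assumption, and the magic string is the one reported for this backdoor; `requests` is a third-party library]:

```python
# Probe a page that is normally password-challenged, once with a normal
# User-Agent and once with the magic string, and compare the responses.
import requests

ROUTER = "http://192.168.0.1/"              # assumed local LAN address
MAGIC = "xmlset_roodkcableoj28840ybtide"    # string reported for this backdoor

normal = requests.get(ROUTER, headers={"User-Agent": "Mozilla/5.0"}, timeout=5)
magic = requests.get(ROUTER, headers={"User-Agent": MAGIC}, timeout=5)

# If the magic string gets the admin page where the normal one gets a
# password challenge, the access method exists.
print(normal.status_code, magic.status_code)
```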

What isn't clear is whether the User-Agent: hack can be used from the router's public IP address vs. just 192.168.x.y [local LAN] or even if 192.168.x.y can be used.

Someone else posted that the hack is used by firmware programs running inside the router to change configuration [which is legitimate for them to do] by sending the request to the web server inside the router. So, it seems the intent was less a backdoor (e.g. where D-Link et al. could remotely take control of the router) and more a way for the router to do its own job.

The fact that the User-Agent: string is a funky, password-like string vs. "internal_legitimate_request" indicates that the code author knew it could be exposed to the public/LAN IP addresses and tried to [weakly] mitigate this. The weakness is akin to a shared secret key that can be recovered by disassembling the code.

A better way might be to add an additional restriction that such internal requests must also come from a known internal source (e.g. an AF_UNIX [vs. AF_INET] socket that presumably could not be faked from an external source). But this would take more time to code, more sophistication, and more code space. Perhaps that was deemed not worth it for a $100 router.
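
Something like this [a sketch; the socket path is hypothetical]. An AF_UNIX socket exists only in the local filesystem namespace, so external hosts simply can't reach it and no User-Agent check is needed at all:

```python
# Internal-config listener bound to a filesystem socket instead of a TCP port.
import os, socket

SOCK_PATH = "/var/run/router_cfg.sock"   # hypothetical path inside the router

if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK_PATH)
os.chmod(SOCK_PATH, 0o600)               # only the firmware's own user can connect
srv.listen(1)

conn, _ = srv.accept()                   # reachable only by local processes
request = conn.recv(4096)                # handle the config request here
```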

Comment Re:Sure, it's good today (Score 1) 415

Lament the hobbling of innovation by an incompetent patent office.

Agreed.

But, in fairness to them, it isn't just competence; they're also [very] understaffed. They have a [huge] backlog which they've been tasked [by Congress?] with reducing. Unfortunately, if they deny a patent, the inventor may refile. Deny again, and it gets refiled again, ... The only way for them to truly clear the backlog is to approve the patent. This kicks frivolous patents into the court system--which may have less expertise but more resources [jury pools, appellate courts, etc.]. Not the right thing to do, but possibly understandable.

I have some hope for the "America Invents Act". This makes it easier for interested third parties to dig up prior art and submit it to the USPTO and trigger a reexamination. Crowdsourcing by experts in a given field may help stem the tide. But, further legislation is probably required (e.g. outlawing software patents). And, I say this as a programmer.

With some poetic justice/irony, Apple has lost its iOS "rubber banding" patent in the EU due to a recent German court decision [reported on Slashdot, I believe]. This is because Steve Jobs demonstrated the rubber banding feature at a conference and said "Boy, have we got it patented". The actual patent application was dated a few months later. Unfortunately, while this is okay in the US, in the EU such a [public] disclosure before the patent is filed nullifies the patent [Steve's talk is considered prior art].

Comment Re:Sure, it's good today (Score 1) 415

IIRC, the Apple magnetic connectors are under patent.

Ironically, someone at my local Starbucks had a charger with hinged power prongs. One of them had broken off.

I hope the EU is allowing for the new USB 3.0 "high power" [100 watts vs 5 watts] spec because we'll need that to charge the higher capacity batteries that are about 3 years away. Some of these have 10x the capacity of present batteries and who wants to wait 80 hours to charge them?

Comment Re:The graphics were simply brilliant (Score 2) 374

Agreed about the 3D shoot-em-ups.

Myst-like games still live on with some changes [I've played Myst/Riven and all of the following]:
- 7th Guest/11th Hour (historical) -- lots of puzzles
- Lara Croft series -- shoot em up, but that wasn't the whole game
- Nancy Drew series (20+ games) -- solve a murder mystery
- Art of Murder series -- be a female FBI agent and stop a serial killer
- Yesterday -- play various characters (including one with no memory) with a progressive mystery storyline

All of these have tough problems/puzzles but do this in a more conventional setting. It's tough enough to solve the puzzles without having to do this in an unfamiliar world.

Myst had mythos [of Atrus, etc.], but it was a bit sterile. There wasn't as much of a storyline [where characters can actually converse]. The newer games are more like interactive fiction. Each of the characters has a distinct personality and you actually start to care about them, just like you would in any good story.

As to game mechanics, some of the modern games have a button to highlight the sensitized areas in a scene. Otherwise, it's genuine work to poke every pixel looking for something. Newer games also have a "give me a clue" button. This helps if you're engrossed in the storyline and reach a point where you admit you're stuck but still want to proceed through the game--for the story.

Myst had no such "I admit I'm flummoxed" help. After a while, it just became work to drudge around the same area looking for the way out. Of course, one could always consult a walkthrough, but that highlights the need in the game itself for something to make it enjoyable for all.

Also, Myst may be having problems because of Ubisoft's stance on DRM. I'll never buy another Ubisoft game, ever, because of it.

Comment You can never quit "The Family" (Score 1) 286

If anyone was thinking of breaking up with the NSA family, the letter states, “We want to put the information you are reading and hearing about in the press into context and reassure you that this Agency and its workforce are deserving and appreciative of your support.”

Family == Mafia [*]

[*] or used to be until the National Stasi Agency sullied the term ...

Comment Re:Double time (Score 1) 68

Yes. From the blockquote: "constitute a powerful endorsement of the carful work". Guess somebody wasn't carful "E"-nough :-)

Self-congratulation aside, they had different goals at the time: they were looking for a unified/legal definition of time that could be universal.

I don't think many programmers realized the implications at the time, either. I mean, Intel was rolling out 5 MHz chips then. The focus was establishing PCs as a credible alternative to the mainframe world. As the CPUs sped up, the problems became more apparent.

Nice website--thanks.

I found another page there with an interesting idea: Set up an NTP stratum 1 server driven by a GPS clock. Don't transmit leap seconds. Adjust the time offset a bit [to compensate for the initial syncup of GPS time with UTC], modify the NTP client code, then handle all leap seconds via the zoneinfo data files.
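
A sketch of how the display side might look [assuming, as the page proposes, that the system clock carries a leap-second-free count and that the "right/" zoneinfo files supply the leap table; Unix only]:

```python
# Keep the clock leap-free; apply leap seconds only when formatting for display.
import os, time

os.environ["TZ"] = "right/UTC"   # zoneinfo variant that embeds the leap-second table
time.tzset()

raw = time.time()                # assumed: counts every SI second, no leap steps
print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(raw)))  # can show :60
```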

Comment Re:Double time (Score 1) 68

During the one second interval when they are applied, what happens to [UTC] timestamps on files that are modified?

Whoosh.

Whoosh yourself ...

You're not describing a problem, you are the problem. If an event occurs during a leap second, you simply timestamp the time, 23:59:60.x.

Yes, that's what the spec talks about. But, it isn't how times are represented internally in systems. They use binary [64 bit] numbers that are [usually] offsets from Jan 1, 1970 [the Unix epoch]--except for MS, of course.

The issue is what happens to a [normally] monotonic sequence n, n+1, n+2 when n, n, n+1 or n, n+2, n+3 is presented instead at the binary level, and how you handle the discontinuities [if you're even in a position to know a discontinuity has occurred]. Most software just sees any and all of the above as k, k+1, k+2, so it has no way to compensate. In most cases, it can't, because the leap second insertion is done in such a way (at the NTP server) that it is invisible even to the OS, let alone a userspace program. The user program would have to know the internal OS policy [which it usually doesn't].
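
For example [illustrative values, from the June 2012 leap second], the binary-level view of a positive leap second is a repeated timestamp:

```python
# What software actually sees across a positive leap second at the binary level.
before = 1341100799   # 2012-06-30 23:59:59 UTC
during = 1341100799   # 23:59:60 is presented as a repeat of :59
after  = 1341100800   # 2012-07-01 00:00:00 UTC

print(during - before)   # 0 -- a "second" that took no time, as far as code can tell
print(after - during)    # 1
```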

Some programs don't care what n/k is [it could be 31579 or 0] because they're more interested in a time delta across a [very] short interval. Leap seconds wreak havoc with that. That's why Google came up with its "leap smear" solution.

Really, your description of how you timestamp a file shows complete ignorance of how OSes, filesystems, and processes actually handle these problems.

And that you're not a programmer. I am. In the kernel/driver/realtime arena for forty years. I deal with keeping systems synchronized at the nanosecond level all the time. So, please stop preaching to the choir.

What you describe simply demonstrates the problem with incorrect assumptions and sloppy coding: that a minute can't have more than 60 seconds. That hasn't been the case for over 40 years.

The assumptions weren't incorrect. They were just ones you weren't aware of [see above]. Your [repeated] use of "sloppy coding" sounds like armchair quarterbacking. Have you analyzed the cost/benefit across a wide range of applications and the risk associated with trying to apply a fix that may have little to no benefit? For example, which programs and what remedy should be applied?

based on POSIX-compliant UTC

'taint no such thing. There's UTC, and there's POSIX, which is in no way compatible with UTC, which existed long before POSIX.

If I had said "POSIX-compliant [use of] UTC" instead, you'd have nothing to argue about.

And leap seconds were not part of the original UTC spec. The folks that added the leap seconds [however well intentioned] didn't anticipate the complications to existing systems already using UTC (e.g. Unix which predated leap seconds by a few years). Now, many are questioning their [marginal] utility versus the complexity they [still] engender [particularly for realtime systems, such as video broadcast], which is why the discussions are going on about removing them from UTC.

Just another example of brain-dead, incorrect assumptions about timescales.

I've also worked on commercial grade realtime video broadcast systems, where I've had to coordinate 5-6 timescales at one time. So, my friend, what's your specific expertise that justifies the invectives?

Since July 2012, IIRC, all corrections are now formally set at six-month intervals.

YRI. You should at least try to understand the subject you're discussing.

"A positive or negative leap-second should be the last second of a UTC month, but first
preference should be given to the end of December and June, and second preference to the end of
March and September.
- ITU-R TF.460-6.

I absolutely do recall correctly. ITU created the spec, but IERS administers it. You could get them in any month, but IERS changed that in July 2012 [by a policy shift, if nothing else]. See IERS bulletin C [the latest edition]: ftp://hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat

"Leap seconds can be introduced in UTC at the end of the months of December or June, depending on the evolution of UT1-TAI. Bulletin C is mailed every six months, either to announce a time step in UTC, or to confirm that there will be no time step at the next possible date."

Notice that it doesn't say any month. What matters isn't what the ITU spec says, but what IERS actually does/will do.

Comment Re:Double time (Score 2) 68

Leap seconds are problematic not because of sloppy coding but because they are a messy problem.

During the one second interval when they are applied, what happens to [UTC] timestamps on files that are modified? If you're compiling, an object file may be updated or not. Or, with rsync, a file may be transferred or not. Or, what about medical equipment [based on POSIX-compliant UTC (e.g. Linux, etc.)] that gives a patient an extra one second of radiation because the master clock is based on UTC?

Also, applications [which generally don't account for leap seconds] would lurch, sometimes with disastrous results.

That's why Google came up with a "leap smear": http://www.theregister.co.uk/2011/09/19/google_has_to_lie_to_computers_about_time/

The solution we came up with came to be known as the 'leap smear'. We modified our internal Network Time Protocol [NTP] servers to gradually add a couple of milliseconds to every update, varying over a time window before the moment when the leap second actually happens. This meant that when it became time to add an extra second at midnight, our clocks had already taken this into account, by skewing the time over the course of the day. All of our servers were then able to continue as normal with the new year, blissfully unaware that a leap second had just occurred.
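
A simplified linear model of the idea [a sketch under my own assumptions--the window length and linear shape are illustrative, not Google's exact parameters]:

```python
# Absorb a positive leap second gradually over a window ending at the leap,
# so no timestamp repeats and no step occurs.
def smeared_time(raw, leap_raw, window=86400.0):
    """raw: a leap-free count aligned with Unix time before the window.
    leap_raw: raw value at the instant of the leap.
    Returns a clock that slews smoothly onto post-leap Unix time."""
    if raw <= leap_raw - window:
        return raw                                  # before the window: unchanged
    if raw >= leap_raw:
        return raw - 1.0                            # after the leap: full second absorbed
    frac = (raw - (leap_raw - window)) / window     # 0..1 progress through the window
    return raw - frac                               # slew out the extra second linearly
```

The slope is 1 - 1/window, so the smeared clock stays strictly monotonic; nothing downstream ever sees a repeated or skipped second.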

I confused nothing [re. leap days]. That was just a loose analogy. Nobody would notice if we applied leap seconds or not. Not even a leap minute every fifteen years would be noticed. And that's the maximum error you can have. Nobody will care if the sun sets in the west a minute earlier than you expect it to [based on TAI]. Better yet, since we tolerate daylight savings time, wait until the leap second error hits an hour. That will take 900 years minimum.

Since July 2012, IIRC, all corrections are now formally set at six-month intervals. While most corrections have been in one direction, if we just kept track of the error, it might self-correct over a long enough period.

Converting all systems (computers, cell phones, NTP, etc.) to use TAI internally would be a good thing. Convert to UTC when displaying the time [could be a user preference]. Everything becomes simpler and more accurate.

Those that truly need to track solar time are a minority (e.g. astronomers, etc.). Let them do the extra conversion rather than burdening the rest of the world [and the systems we use in our daily hitech/21st century lives].

Comment Re:Double time (Score 1) 68

Perhaps it's time to go the other way and use GPS/TAI for computer clocks instead of UTC. In other words, perhaps civil time should be changed. Just because it used to be based on something doesn't mean it needs to be in the future.

Leap second adjustments at the moment they occur are problematic for OS scheduling during that period. Also, you have to maintain a table of whether the twice-a-year leap second has been applied [and in which direction]. The table must be updated every six months because entries can't be predicted beforehand [they're decided upon by IERS]. The application of leap seconds is actually quite irregular.
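
The bookkeeping looks something like this [a sketch; the entries shown are real announcements, but the table has no future horizon--that's the problem]:

```python
# Converting between UTC and a leap-free scale needs a table that can only
# grow as IERS announces each leap second; nothing past it can be predicted.
LEAP_TABLE = [          # (Unix time of leap boundary, TAI-UTC in force after it)
    (915148800, 32),    # 1999-01-01
    (1136073600, 33),   # 2006-01-01
    (1230768000, 34),   # 2009-01-01
    (1341100800, 35),   # 2012-07-01
]

def tai_minus_utc(unix_time):
    offset = 31         # value in force before the first entry above
    for boundary, cumulative in LEAP_TABLE:
        if unix_time >= boundary:
            offset = cumulative
    return offset       # stale [wrong] for dates past the table's horizon
```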

All this complexity for a [possible] adjustment of one second every six months?

Alternative schemes have been proposed that would accumulate leap second error until it truly becomes significant (e.g. do a leap minute adjustment every thirty years).

If the timebase that we use in the modern world (e.g. computers, cell phones, banking transactions, etc.) needs to track the sun that accurately, why do we tolerate error accumulation of up to a day every four years [leap days]?

Comment Re:Treason.. or... (Score 1) 524

More likely obstruction of justice or some such.

A person with a security clearance revealing classified information [to another person] is treason. But the receipt of such information is not [although it's a stretch, some cases have tried to charge the receiver under the Espionage Act, even though they are U.S. citizens].

Side note to U.K. citizens (the ones who [still] insist they don't need a formal constitution): receiving classified information is treated exactly the same as disclosing it under the Official Secrets Act [IIRC].

However, "aid and comfort" might cover Meyer's assertion (e.g. failure to comply gives "aid and comfort" to an enemy) ala harboring a fugitive [electronically speaking].

In any case, it really is time to say: Enough is enough ...
