Posting to undo moderation mistake.
... not just with AND and OR, but also "within so many words"...
Try Google's firstterm AROUND(3) secondterm.
"I'm going to work for the IRS" said no competent, industrious individual. Ever.
Work for? Yes. Stay? No.
An IRS auditor once told me that IRS experience looks great on a resume if you want to work as a tax accountant or similar. He said the job at the IRS wasn't great, but that the experience was valuable. So, he continued, the IRS has lots of turnover.
I've had returns questioned a few times. The above conversation occurred when I was working as a contractor and the IRS claimed we'd underpaid taxes after they recorded my quarterly estimated taxes to my SSN in their databases and recorded our *joint* April 15th taxes to my wife's SSN. It took a couple of years to settle IIRC, and it seemed there was a different agent every few months.
[ A summary of a discussion from http://www.sitepoint.com/open-source-licensing/ and lots of good links re licensing issues. ]
The parent easily spent an hour or few putting that together.
Another good resource is one of the chapters of http://producingoss.com/
I published a project on github a year or two ago. It is/was in an alpha state and not functional, so there's no huge interest. Figuring out what license to use took some time because of interactions with other efforts. My project makes use of files licensed under the MIT license. I expect to eventually contribute my code to another project that uses what seems to be an ISC license. I need the user to download a library that's only available as source. Not the most complex situation, but researching the license options takes time.
I can imagine that some people want to publish their code, but are either unaware that the default license is "all rights reserved", don't care, or care but don't want to spend the time to figure out what's appropriate.
The default license is "all rights reserved". When you create a new (public) project, github should require you to acknowledge that or to specify a license. A link to a good discussion of licensing issues could be included. The list of licenses to choose from should include "all rights reserved" and "see project-specific license". I'm not sure that "public domain" should be on the list because IIRC you need at least a minimal license to say "no warranty" and provide a "limitation of liability". Project owners could still leave their code unlicensed.
Seems like a bit of FUD on both sides of the argument...
A while back, I spent a few minutes skimming the Wayland FAQ and Wayland Architecture Diagram. Interesting stuff, especially the architecture page, and they provide actual detail while the TFA mostly doesn't.
For comparison of the two architectures, when running X, you might have an ordinary window manager or you might have a compositing window manager. The Wayland model is that Wayland *is* a compositor that provides both window manager functions and some of the functionality of an X server.
From what I can see, here are the architectural differences between X and Wayland when it comes to supporting remote app display:
Intentionally misstating things by oversimplifying, it sounds like the reason Wayland doesn't support remote displays is that it also doesn't support local displays! More accurately, Wayland supports local displays (of course) but, unlike X11, provides no way to render to them. Wayland doesn't do rendering; it apparently "just" knows how to swap video buffers to a display device and coordinate buffers between multiple clients.
I'm thinking that, for example, if you want to write a graphical app, you might target OpenGL or cairo and then expect your code to work both on Unix (with X) and on Windows (without X). With Unix/X, I'd expect an OpenGL library that handed X primitives to the X server. With Wayland, you'd apparently have an OpenGL library that rendered to a buffer and then handed the buffer off to the Wayland compositor.
So, Wayland isn't doing some of the things that an X server would do. Wayland never works with drawing primitives. It seems obvious that you'd never be able to run apps that use the old X toolkit libraries against Wayland without an X server in the picture. And, both TFA and the FAQ report this and note that you'll need X server(s) in addition to Wayland for the foreseeable future. The X server(s) talk to Wayland instead of to the hardware.
There's going to be support for running Wayland apps remotely. However, as others have noted, an obvious question is how efficiently a "native" Wayland app could be displayed remotely. If the app and its libraries are rendering graphics primitives into display buffers, it seems obvious that the low-level primitive operations are lost by the time Wayland gets the buffers, so you have to be able to efficiently transmit bitmap deltas. Cue arguments re whether drawing primitives or bitmaps are more efficient...
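To make the bitmap-delta idea concrete, here's a minimal sketch of diffing two framebuffers and shipping only the changed tiles. The tile size and 2D-list frame representation are my own illustration; this has nothing to do with Wayland's actual protocol.

```python
def frame_delta(old, new, tile=4):
    """Compare two equal-sized frames (2D lists of pixel values) and
    return only the tiles that changed, as (row, col, tile_data) tuples."""
    deltas = []
    rows, cols = len(new), len(new[0])
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            old_tile = [row[c:c + tile] for row in old[r:r + tile]]
            new_tile = [row[c:c + tile] for row in new[r:r + tile]]
            if old_tile != new_tile:
                deltas.append((r, c, new_tile))
    return deltas

def apply_delta(frame, deltas):
    """Patch the changed tiles back into a frame on the receiving end."""
    for r, c, tile_data in deltas:
        for i, row in enumerate(tile_data):
            frame[r + i][c:c + len(row)] = row
    return frame
```

If only one pixel changes between frames, only its enclosing tile crosses the wire instead of the whole buffer, which is the whole appeal of the approach.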
The discussion of doing remote Wayland apps seems to revolve around how to transmit the per-app buffers across the network instead of handing the buffer to the local Wayland compositor / display driver. Or perhaps about having the compositor know that certain app buffers should be transmitted instead of composited to the local display, but that's almost the same thing when viewed as a flow.
Transmitting bitmap deltas works pretty well, but for some apps, sending higher-level information such as the original OpenGL might be better. With X, they defined an extension, GLX, for essentially passing OpenGL to a remote server. Similarly, I imagine it would be useful if an OpenGL app could talk to a remote Wayland server instead of sending bitmap deltas. So, it looks to me like it would be useful to define an extension mechanism for transport that would allow OpenGL commands to be sent to a remote computer where they could be rendered to a remote buffer and handed off to the remote Wayland compositor. Bonus points if the extension mechanism is generic. Seems like transport of audio might be another good fit for a modern replacement of X.
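To illustrate why forwarding drawing commands can beat shipping pixels, here's a toy comparison. The command encoding is made up for illustration; it is not GLX or any real wire format. A full-screen solid fill encodes in a few bytes as a command, versus megabytes as a raw framebuffer:

```python
import struct

def encode_command(x, y, w, h, color):
    """A toy drawing command: fill a rect with a solid RGB color.
    Four unsigned shorts plus three color bytes = 11 bytes total."""
    return struct.pack(">4H3B", x, y, w, h, *color)

def encode_bitmap(w, h, color):
    """The same fill expressed as a raw 24-bit framebuffer: 3 bytes/pixel."""
    return bytes(color) * (w * h)

cmd = encode_command(0, 0, 1920, 1080, (30, 30, 30))
fb = encode_bitmap(1920, 1080, (30, 30, 30))
```

For this (deliberately favorable) case the command is 11 bytes while the bitmap is about 6 MB; the argument over primitives vs. bitmaps is really about how often real app output looks like this case versus photo-like content that compresses no better as commands.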
Not an X server developer nor a Wayland developer. I may have garbled things somewhat, but perhaps someone could clarify the mistakes and help take a portion of the FUD out of the weekly Wayland discussions.
TL;DR: Wayland should be able to support remote display of individual apps, but it will be based on transmission of bitmap deltas, not higher level drawing primitives. Seems like OpenGL or whatever could be added though.
No, the key has to be inside the passenger compartment to start it.
My sister reports that her car started just fine with the keys on the roof or hood. Not that they stayed there after the car started moving...
You already should be buckling a seat/shoulder belt, and that's more effort than putting a key in a keyhole. So I don't see that the risks of keyless entry *and* starting are worth the minor convenience of not having to use a key. YMMV.
How is this paper not a scientific approach to empirical data? We have empirical observations of a wide range of animal/human behaviors. The authors propose a toy mathematical model that reproduces key features of several interesting observed behaviors. This is perfectly good science, just like saying "hey, an F = G*m_1*m_2/r^2 force between massive objects recreates the observed motions of the heavenly bodies".
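As a quick sanity check on that formula, here's the Earth-Moon force worked through; the constants are rough textbook values, so treat the result as approximate:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.972e24   # mass of Earth, kg
m_moon = 7.342e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

# F = G * m_1 * m_2 / r^2, roughly 2e20 newtons
F = G * m_earth * m_moon / r**2
```

The point of the analogy stands either way: a simple closed-form model earns its keep by reproducing observed motion, not by being complicated.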
Astrology fits the definition of careful observations, using those observations to explain other observed behavior, and making predictions. Data gathering and theories aren't science by themselves.
--- a predictive, testable mathematical model that can be compared with measurements of the motions and behaviors of actual critters to see how well it works.
Now, with that trailer, you have a definition of science: "testable". But apparently TFA doesn't include testable hypotheses.
Too many flames in these weekly Wayland discussions and not enough facts (or maybe the facts are downmodded; I've gotten to the point where if I look at a Wayland article, I don't read all of the comments).
However, as others have noted, an obvious question is how efficiently a "native" Wayland app could be displayed remotely. If the app and its libraries are rendering graphics primitives into display buffers, it seems obvious that the low-level primitive operations are lost by the time Wayland gets the buffers, so you now have to be able to efficiently transmit bitmap deltas. Cue arguments re whether drawing primitives or bitmaps are more efficient... OTOH, it seems unlikely that apps would include their own rendering code instead of using a library. So, we can hope that the libraries offer both Wayland and X backends, I guess.
Back in the day, there were programs for MS-DOS (and no doubt other OSes) that would use "low level" access to the floppy drive in an attempt to be able to read any known type of encoding (and filesystem too of course). I recall one that supported dozens or hundreds of formats from various types of PCs and OSes (CP/M boxes, Amigas, Apples, etc). This was a couple of decades ago, so I don't recall the names of the software. I imagine a search would turn some of these up. It'll likely be "shareware" or commercial rather than open source, so if you find one, it'll probably be useful only with a good emulator or real HW and not as a reference.
At any rate, I imagine the various computer history museums have specs, S/W, and H/W and that there are few formats that are truly orphaned.
Today's specs and formats are no doubt even less likely to be lost, as it's more common to have computer copies of these, and even to publish them, than it was in the '80s and earlier.
OTOH, as others have noted, the shelf life of some of the older media is getting long in the tooth. I've got a 9-track tape from college that's probably unreadable. (And probably of little interest anyway).
I prefer tau day as it gives me an excuse to get 2 pies instead of just one.
For the record, I only know pi out to 5 decimal places: 3.14159
Fourteen digits are given by a mnemonic I learned in grade school:
How I wish I could recollect, of circle round, the exact relation Archimede(s) unwound.
Just count the letters in each word.
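The letter-count trick can be checked mechanically. A quick sketch, dropping the optional "s" from "Archimede(s)" as the parentheses suggest (note the scheme would break on any word of ten or more letters):

```python
mnemonic = ("How I wish I could recollect of circle round "
            "the exact relation Archimede unwound")

# Each word's letter count is one digit of pi:
# How=3, I=1, wish=4, ... Archimede=9, unwound=7
digits = "".join(str(len(word)) for word in mnemonic.split())
# digits == "31415926535897", i.e. pi = 3.1415926535897 to fourteen digits
```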
Decades ago, when the first bank ATMs were being marketed, bank managers were skeptical. They thought that customers preferred human tellers and thought that "people are not going to walk up to a machine and use it".
The fact that people often prefer automated systems is *not* evidence that people don't like talking to or interacting with other people. It's evidence that people would rather deal with a robot than deal with a human in a stylized, scripted, robotic interaction. Efficiency, lack of fuss, predictability, just getting it done, etc FTW!
So, yeah. Automation threatens some roles previously done by humans. Editor as troll?
tl;dr It's often preferable to interact with real automation than humans playing robots.
Well, 110 baud is 110 bits-per-second. If that's not a unit of measure that involves the binary system ...
The earliest VM subsystem that I've looked at is the one for Multics in the 80's. What early systems had VM pages that were not multiples of disk sector size? Just curious.
They already do that precrime stuff with people who possess child porn.
That's not pre-crime. It's crime. Having child porn doesn't just mean the viewer might abuse a child in the future. It also means a child was abused to create the images.
There *is* something magical about a 512 byte or 4096 byte sector size.
It may be possible for a hard disk sector size to be 520 bytes, but it's not convenient nor efficient. What computers do with hard disks at the most basic level is to transfer data between memory and disk. Since computers are binary, it only makes sense that the size of "pages" of memory used in virtual memory schemes be a power of two. It's also much simpler and saner if these in-memory pages are a multiple of the sector size.
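A sketch of why a power-of-two sector size keeps the address math trivial: with 512-byte sectors, a byte offset splits into a sector number and an in-sector offset using shifts and masks, whereas a 520-byte sector forces real division. (The numbers below are just the common 512/4096 case, not any particular drive.)

```python
SECTOR_BITS = 9                  # 512 == 2**9
SECTOR_SIZE = 1 << SECTOR_BITS

def split_offset_pow2(byte_offset):
    """Sector number and in-sector offset via shift and mask:
    cheap single-cycle operations on any binary machine."""
    return byte_offset >> SECTOR_BITS, byte_offset & (SECTOR_SIZE - 1)

def split_offset_520(byte_offset):
    """A 520-byte sector needs actual division and modulo."""
    return byte_offset // 520, byte_offset % 520

# A 4096-byte memory page holds exactly eight 512-byte sectors...
assert 4096 % 512 == 0
# ...but not a whole number of 520-byte sectors.
assert 4096 % 520 != 0
```

The same shift-and-mask convenience is why virtual-memory page sizes are powers of two, and why pages being a clean multiple of the sector size matters: a page I/O maps to a whole number of sectors with no partial-sector read-modify-write.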
Similarly, the nature of digital signaling explains why early networking speeds involved powers of two.
No, phone is not a second factor. It's just a different communication channel to send auth data.
Still can be cloned by an attacker and auth data intercepted.
IOW it's not something you have, it's something that gets sent to you over a different channel.
You're arguing that a phone is not a physical object because it can be used to communicate data. By that logic, the RSA SecurID hardware token I have is also not a second factor and would just be a different mechanism to give me auth data that I type into my work's VPN SW in addition to my password. That's bad logic. You're confusing the question of whether or not a phone is an object that may be physically possessed with how that object is used.
As noted at two-factor authentication, the three types of factors are knowledge (something you know), possession (something you have), and something you are.
The typical example of "something you know" is, of course, a password.
Both a phone and a physical key are objects. Things you might have. Neither is knowledge. A door key may be used to move tumblers while the phone is used to receive secrets to be echoed back. Note that you're receiving a new secret that you didn't previously *know*. This is quite different than, for example, simply typing in your phone number.
Nor is whether something is a physical possession, or whether it's knowledge, related to whether or not it can be compromised. A physical key can be duplicated; a phone can be cloned; either can be stolen. Nor would we bother with using more than one factor if it weren't possible for factors to be compromised.
However, some attack vectors work by using one type of factor in place of another. For example, say you get the serial number of my hardware token, acquire the algorithm the tokens use to generate changing numbers, and any necessary initial conditions. Some might argue that you're using knowledge to subvert a mechanism designed to require use of a unique physical object. Calling it a virtual duplicate might be more accurate. Still, my physical token remains a physical object; it doesn't disappear in a puff of logic just because you found a way to trick the system into thinking you had something you don't have.
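The RSA token's actual algorithm is proprietary, but the open HOTP standard (RFC 4226) illustrates the scenario above: the code sequence is determined entirely by a shared secret and a counter, so "knowledge" of those is enough to build a virtual duplicate of the physical token. A minimal sketch:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter,
    dynamically truncated to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Anyone who knows the secret and counter computes the same codes,
# i.e. a "virtual duplicate" of the token, with no physical object at all.
```

With the RFC 4226 test secret `12345678901234567890`, counter 0 yields the published vector 755224, which is exactly the point: leak the secret and the "something you have" degrades into "something you know".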