Yeah, they've been really playing up that angle when competing against Google Apps for Business in particular.
That appears to be the argument, yes. The court isn't claiming authority to send police officers to Ireland and physically seize the data, or authority to force Irish police to conduct a search. Instead they're demanding that Microsoft (a U.S.-based company) produce the requested evidence, if indeed its U.S. staff have access to it (which they probably do).
I think it's problematic from a practical perspective, but I can see how someone could reach that conclusion. Jurisdiction over U.S. persons usually does extend to their overseas assets: in a fraud investigation, for example, a U.S. court can demand that you turn over your Swiss bank account records, even though those accounts are (of course) in Switzerland.
The main problem IMO is that it puts companies operating in multiple jurisdictions in a bit of a bind. For example, Microsoft Ireland may have responsibility under EU law to not release data except in certain cases, while Microsoft U.S. is required to release it, meaning the company will violate the law somewhere no matter what they do. I'm not sure whether it's possible to avoid that by really firewalling the access, e.g. make Microsoft Ireland an operationally separate subsidiary whose servers cannot be directly accessed by Microsoft USA staff. But that would certainly complicate operations in other ways.
Malina is pretty well known in some corners of CS for his work on kinetic sculpture and generative art, and for founding the International Society for the Arts, Sciences and Technology, along with its associated journal Leonardo. But I didn't know he did rockets earlier in his career.
A distribution of the expected returns would be more useful than the mean expected return, which can be dominated by a handful of best-selling titles.
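To make that concrete, here's a tiny sketch with entirely made-up numbers, showing how a handful of hits can drag the mean far away from what a typical title earns:

```python
# Hypothetical returns for 100 titles: most earn little, three are best-sellers.
returns = [1_000] * 97 + [500_000, 1_000_000, 2_000_000]

mean = sum(returns) / len(returns)
median = sorted(returns)[len(returns) // 2]

print(mean)    # 35970.0 -- pulled way up by the three hits
print(median)  # 1000 -- what a typical title actually earns
```

A full distribution (or even just a few percentiles) would make that skew visible, where the mean alone hides it.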
Well mostly I want a good starting point written by someone who's looked into it (like the DJB ones above). That involves: 1) parts that work together and have been successfully used together, preferably under Linux, by someone other than me; and 2) parts that fit at a good point on the price/performance curve.
Sure, I could dive into separate reviews of every individual component and piece them together myself, but that's more research than I want to do.
But I'm thinking of building one again. How exactly does one go about it nowadays? DJB's computer building guides used to be a nice starting point, but he stopped updating them in 2009. Is there something like that, but with current-gen hardware?
(Fwiw, I'm interested more in workstation usage, not a gaming machine.)
When contributors have to invest their credibility in their entries, entries are less likely to be wild untruths.
I'm not sure that's true. There is a lot of total shit in the academic literature, and it's getting worse. And part of the problem is precisely that people's names are attached, so they now have an incentive to game the system. People get promoted based on publications and citation counts. This leads to huge pressure to manufacture them, by any means necessary. There are citation rings out there, people reviewing friends' papers, people falsifying or misconstruing results, etc. Some of them are uncovered, but many aren't. And there are a lot more low-level gray-area things going on that are less likely to be uncovered.
Yeah, that's definitely true. A particularly common pattern is that a journalist just cribs something from Wikipedia without researching it, and then Wikipedia cites the news article as if it were an independent source, when in reality it isn't. I'd personally be in favor of tackling this by strongly discouraging the use of news articles as sources, because they typically have extremely poor standards of research. However that leads to other problems, because for contemporary events there is often no other source available, and pushing this too far then runs into the opposite criticism of Wikipedia, that it's too "deletionist". Tricky balance, I think: Wikipedia should cover as much as possible, but should also be as reliable as possible, which are two goals often in conflict.
Especially if you're a professor, you should know better. Wikipedia articles cite sources. Well, some of them do. If they don't, you should raise an eyebrow.
If you see a statement in a Wikipedia article that you're thinking of repeating or relying on for something, look first to see whether it cites a source. In this case it did not. If there's no source, stop there; you probably shouldn't trust the statement, at least not if it's something that matters at all. If it does cite a source, things are better, but there is still one more step before you rely on it for anything more than barroom trivia (like, say, publishing an academic paper): take a glance at that source and check whether it really says that.
Incidentally, this habit will help you with other reference works too. Printed books are full of errors as well, especially the more popular ones (those "Who's Who in the Roman World" type books are riddled with incorrect facts). The way to avoid being tripped up by them is to look for references first, and check the references second. (How thoroughly to do so of course depends on what you're using the information for.)
...painted a picture of a corporation overrun by the neverending quest for greater profit
Or for short, just "a corporation".
Another case in that regard is museums that feel they have a kind of "advocacy" role: a museum dedicated to the heritage of $ethnicgroup, say, or to a specific only-slightly-famous painter. They often have a strong desire to make their topic better known, so they're more likely to go for the maximum-dissemination route.
The Internet is not powered by experiments on humans. Not even in the DARPA days.
No, websites do NOT experiment on users. Users may experiment on websites, if there's customization, but the rules for good design have not changed either in the past 30 years or the past 3,000. And, to judge from how humans organized carvings and paintings, not the past 30,000 either.
To say that websites experiment on people is tripe. Mouldy tripe. Websites may offer experimental views, surveys on what works, log analysis, etc, but these are statistical experiments on depersonalized aggregate data. Not people.
Experimenting on people, especially without consent, is vulgar and wrong. It also doesn't help the website, because knowing what happens doesn't tell you why. The early history of AI is littered with extraordinarily bad results for this reason. Assuming you know why, assuming you can casually sketch in the cause merely by knowing one specific effect, is insanity.
Look, I will spell it out for these guys. Stop playing Sherlock Holmes; you only end up looking like Lestrade. Conan Doyle's fictional hero used recursive subdivision, a technique Real Geeks use all the time for everything from decision trees to searching lists. Isolating single factors isn't subdivision, because there isn't a single ordered space to subdivide. Scientists mask, yes, but only when dealing with single ordered spaces, and only AFTER producing a hypothesis. And if it involves research on humans, also after filling out a bloody great load of paperwork.
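For what it's worth, the recursive subdivision being alluded to (on a single ordered space) is just the familiar binary-search pattern. A minimal sketch of my own, not anything from the original comment:

```python
def binary_search(items, target, lo=0, hi=None):
    """Isolate target by recursively halving a sorted list."""
    if hi is None:
        hi = len(items)
    if lo >= hi:
        return -1                        # search space exhausted: not present
    mid = (lo + hi) // 2                 # split the ordered space in half
    if items[mid] == target:
        return mid
    if items[mid] < target:
        return binary_search(items, target, mid + 1, hi)   # keep upper half
    return binary_search(items, target, lo, mid)           # keep lower half

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # 3
print(binary_search([2, 3, 5, 7, 11, 13], 6))   # -1
```

Each step discards half the remaining space, which only works because the list is ordered; that's exactly the structure that's missing when you're poking at isolated UI factors.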
I flat-out refuse to use any website tainted with such puerile nonsense, insofar as I know it to have occurred. No matter how valuable that site may have been, it cannot remain valuable if it is driven by pseudoscience. There's also the matter of respect. If you don't respect me, why should I store any data with you? I can probably do better than most sites out there over a coffee break, so what's in it for me? What's so valuable that I should tolerate being second-class? It had better be damn good.
I'll take a temporary hit on what I can do, if it safeguards my absolute, unconditional control over my virtual persona. And temporary is all it would ever be. There's very little that's truly exclusive and even less that's exclusive and interesting.
The same is true of all users. We don't need any specific website, websites need us. We dictate our own limits, we dictate what safeguards are minimal, we dictate how far a site owner can go. Websites serve their users. They exist only to serve. And unlike with a certain elite class in the Dune series, that's actually true and enforceable.
I see at least three common approaches museums are taking to images of their collections:
1. Maximum lockdown: no photos of the collection on the internet, or at most some very low-res ones on the museum's website. The physical museum itself will typically have anti-photography policies to try to enforce this. The goal is to de facto exercise exclusive rights to reproductions of the work (even where the copyright on the work itself has expired), as a revenue source, through e.g. high-quality art books, licensing of images, etc.
2. Disseminate through museum-owned channels. The museum digitizes its works and makes them available to the general public free of charge, via its own website, in at least fairly high-resolution images, a "virtual collection" that anyone can visit. Third-party dissemination may be possible in certain jurisdictions, but the museum either doesn't encourage or actively discourages it. The goal is to fulfill its public mission of dissemination/education, but while maintaining some control/stewardship of the work even online.
3. Maximum dissemination. The museum digitizes its works and makes them available in as many places as possible under a permissive license: its own website, archival repositories run by nonprofits and state institutions, Wikimedia, archive.org, news agency file-photo catalogues, etc. The goal is to fulfill its public mission of dissemination/education as widely as possible, and perhaps also achieve some advertising for the museum's collections and the works/artists it conserves, by ensuring that its works are the ones most likely to be used as illustrative examples in Wikipedia articles, books, newspaper/magazine articles, etc.
The EU would like to buy American gas rather than Russian, but getting enough LNG infrastructure to replace piped gas is incredibly expensive and not something that can be built quickly.
He didn't get the job done in this case, though. He sent an abusive email about a bug that had already been patched, with a tirade about register spills that aren't even related to the bug.