In a Skinner box, the lab rat pushes a button and gets a food pellet
Thanks for the info. Honestly, I can't remember where I got the 2ms figure [even though it was within a day or so--sigh]. It may have been from somebody else's post, or I misread a wikipedia entry, or another article (e.g. arstechnica, or [gasp] cnn).
This is even better. 1000x faster than flash (at 5us) means 5ns, which brings 3D XPoint into the realm of DRAM [or beyond]. I've done a quick check and at least two different pages peg DRAM at 50ns [don't quote me--obviously].
Most DRAM memory systems (e.g. DDR2, DDR3, etc.) take X time to produce the first cache line, but can produce the next N cache lines (from different interleaved banks) in short order as they've been fetched in parallel (they assume a linear access pattern as for a simple loop through an array). This makes it harder to compare apples-to-apples given the sketchy info Intel has released at this point.
I reread the fine print on one of Intel's slides and it just says that XPoint has negligible endurance problems. So, no wear leveling needed. I've refrained from buying laptops that have SSDs (based on flash) because of this. I was an engineer in a company where the marketing dept pushed for flash instead of hard drive (some 10+ years ago). They wanted it because it was sexy and fewer systems arrived at the customer with DOA hard drives due to jostling in shipment. But, these systems would fail at random times due to wearout (despite wear leveling and some additional mitigation we were doing). Eventually, the CEO said to me: "I guess we made a mistake using flash". I said: "Yup, you should have listened to your engineers" [our attitude was flash will be the choice but not now].
The cost hinted at [between DRAM and flash] means that XPoint will have a lower cost than DRAM [at higher density]. So, if the access times are at DRAM or better [or even slightly slower], XPoint will make a DRAM replacement. Since DRAM/flash cost differences are something on the order of 10x per bit, if XPoint's cost is closer to flash, it's also a flash replacement, even if it's a bit more expensive.
XPoint seems to be much simpler/smaller in cell design [no transistor in the cell], no complex timing sequences as in DRAM, no complex wear leveling or block access as in flash. The simplicity of designing a system around XPoint can make it very attractive in a variety of use cases. The claim that this will open up design of systems we haven't thought of yet isn't mere hype.
In short, if only half of what they've said about XPoint is true, it is a big deal.
A friend of mine said that W10 will remove apps that are not "W10 compatible". I thought this was an exaggeration but according to http://www.microsoft.com/en-us... it may:
If your antimalware subscription is not current (expired), Windows will uninstall your application and enable Windows Defender.
Some applications that came from your OEM may be removed prior to upgrade.
For certain third party applications, the Get Windows 10 app will scan for application compatibility. If there is a known issue that will prevent the upgrade, you will be notified of the list of applications with known issues. You can choose to accept and the applications will be removed from the system prior to upgrade. Please be sure to copy the list before you accept the removal of the application.
Normally, I'm not overly paranoid, but that last paragraph is a bit troublesome. Is there a list of such incompatible apps? Even though the get W10 app is supposed to flag them ahead of time, I'd be more comforted if there was also a list [that also explained why], in addition to [and before] having to run the probe app.
For example, I've got 5+ years of TurboTax. Each [year's] version does its own update when you invoke it. You need to keep all versions around [just in case you need to look at an older tax form you filed]. If the oldest version was not W10 compatible, would you need to invoke it (under Win7/Win8) to get it to update/upgrade before installing W10?
What about self updating apps in general? Adobe Acrobat Reader and Flash, as well as [yecch] Java come to mind. Or, Firefox, cygwin, vlc, handbrake?
NAND will eventually hit the "die shrink" wall. Since this new Intel memory apparently fits nine cells in the same die area as a NAND cell, it will eventually take over from NAND.
As a side note [to show I'm not totally against NAND flash], a Japanese researcher found, about a year ago, that if you add a heating element to a NAND cell [similar to the one in ferroelectric memory], you can "boil off" the excess trapped charge and eliminate the "wear out". He believed that this was a trivial addition to existing NAND process tech [and could have been done five years earlier] and would take less than a year to enter full production. Note further, that the "boil off" operation only needs to be done periodically, say once every six months or so [it resets the "wear out" cycle].
Intel, historically, charges through the nose for new tech like this [they certainly charged a lot for NAND when it came out], then eventually drops the price. When the 80386 came out, they were charging $750 per chip, even though the chip was designed to be sold at a handsome profit at $35/chip. So, I suspect they will keep the price high, serve the market high end, and then drop the price, increase the speed, etc. if other competing technologies look like they'd overtake it [and when their own NAND factories reach EOL, etc.]
My long term bet is [still] on Hewlett-Packard's [or others'] memristor memory. Back in Oct 2011, they were planning an SSD replacement within 18 months, followed by a DRAM replacement in another 18, and then on board CPU memory later. They've since dialed that back to 2018 for SSD. See http://www.theregister.co.uk/2... (Also, wikipedia on memristor). They must believe that they can compete effectively with flash, even with projected NAND advances. Meg Whitman recently got a presentation on memristor from the engineering team, and when it was all over, she said to the finance VP [I'm paraphrasing] "Find them whatever money they need".
Disclaimer: I have a special fondness for memristor because the guy who first postulated its existence was Leon Chua [at Purdue]. My EE professor was one of Chua's students [and used to tell amusing anecdotes about Chua].
This memory is byte addressable (e.g. RAM-->random access memory), so no "block erasure" needed in the write cycle as in NAND flash. It's also 1000x faster than NAND flash (at 2ms), so access time should be about 2us, and no wear leveling needed. It also has a higher memory density--9x if you believe the block diagram. It can also be stacked 3D, which, IIRC, flash can't [or hasn't been] up to this point.
There are a number of other non-volatile "solid state" memory technologies in the works: magneto-resistive (memristor) RAM (with an access time comparable to L2 cache), ferroelectric RAM and carbon nanotube memory (with a switching time on the order of picoseconds).
These are a few years off--depending. But, this new memory is slated to go into full production in 2016. Cost is a bit more than DRAM, but less than flash.
With regard to SSDs, we're at a similar point just before core memory got replaced by DRAM, some 30 years ago. It should also be noted that Intel is a major NAND flash developer/manufacturer/proponent, so if they're coming out with this in production volume, it will [quickly] erode the market for their own NAND flash business and they seem to be happy to do this.
SATA 3.2 (aka SATA Express) is a connector change, but is actually PCIe. PCIe is already fast enough. IIRC, Apple hooks up some SSDs directly through PCIe.
And, PCIe can actually go "off board" via a cable (since PCIe is based on separate upstream/downstream lanes and differential line drivers). Also, PCIe 4.0 will have a transfer rate of 31.5 GB/s, yet be fully backward/forward compatible.
Intel already has a CPU package that has two substrates wire bonded together, one for CPU and one for memory. When I saw this, I assumed it would be to accommodate HP's memristor memory. But, now, it's [obviously] been planned for this new type of memory.
Okay, just answered my own question. I also had "ChallengeResponseAuthentication no" in my sshd_config. When I changed this to "yes", I was able to reproduce the bug. In the original article, I had done a
My original slashdot post, with additional security I use and the logging of script kiddies I've been doing for years: http://slashdot.org/comments.p...
The redhat page: https://access.redhat.com/solu...
I just tested this (I've got UsePAM yes in sshd_config) on fedora 21 and I only get three tries before disconnect. So, what's special about freebsd?
I never did post anything back to an ISP. I assumed the result would be what you saw in practice. Also, if it were "state sponsored", they would ignore it. If it were somebody trying to find a portal that would circumvent the "Great Firewall of China" [which I'd be in favor of], posting back might just "out" them [to the government].
I just got sshd patched/reinstalled. I just reverified that it disallows login/pw from public IP but allows login from local LAN on accounts that have no pubkey. So, I opened the firewall for sshd [it had been firewalled for two days]. It took exactly five hours for the first script kiddie to show up.
No, you're not crazy. If you are, then I am, too. People that say that are usually uninformed/unaware of what truly constitutes good security. IMO, security is relative to what you're trying to protect. Good security should be minimally intrusive to authorized users. People who bandy about the "crazy card" are most likely to implement systems that regular users try to circumvent (e.g. mandating a 30 character password with funky chars will just cause users to put the password on post-it notes). Note that for website logins, I use a different login for each site, and different funky password. Most of the time, the browser password manager takes care of the pain.
I [being a systems/kernel programmer] have worked on some "security" projects, and some of the people I worked with were "crazy". By that I mean, they locked down the development environment to the point where it was almost unusable and productivity suffered. In addition to genuine security, they also subscribed to the "security through obscurity" doctrine. This seems to be typical, based on my experience, and what I've read about what Linus [Torvalds] has to say about them.
OTOH, I worked on a realtime broadcast quality H.264 encoder. While everybody had a personal login, the lab encoders' root password was "password". We decided from day one that the test encoders were "test equipment", just like an oscilloscope. This was fine, because the entire lab subnet was triple firewalled and even if somebody had logged into root on an encoder, it would let them roach it, but not get access to anything that mattered like the CVS server, etc.
Here's a different type of "crazy"
Ironically, the only place where we had to use high security was in product shipments to our principal customer. Updates had both software changes and firmware changes [to custom hardware], which were QA'ed as a unit. But, this customer felt that software updates were okay, but that firmware updates were too "risky" [and that they knew better than we did]. So, they would apply the software changes but not the firmware ones, and then complain to customer support that "things were broken".
We were providing "enterprise grade" customer support [including onsite visits] and even after telling the customer to update the firmware they wouldn't do it. To solve this, we [engineering] made it [had to make it] impossible to do a piecemeal upgrade [with a nearly impossible to remember root password and disabling any override to the boot process].
Also, we had a rev numbering scheme that was X.Y.Z where Z was for simple/minor bug fixes. That same customer balked, thinking any change to Z was "a major change" [based on number of "dots"]. We solved this by shipping them the revs as 1.X.Y.Z and they were happy once again [blissfully unaware].
I'm probably going to be labelled crazy for what I say below. It's a rant about selinux in "targeted" mode, so you can skip it if you want.
selinux was designed [by the NSA] to provide security for gov't systems that have multiple levels and classifications. Confidential, secret, top secret, most secret, etc. And, need-to-know classifications like "noforn" [no foreign], "five eyes" [US, UK, Canada, Australia, New Zealand], etc. This is useful. An example would be applying this to the FBI. Not every FBI agent has need-to-know about every ongoing investigation.
But, nobody would use that stuff outside a government. So, selinux has the targeted mode. It is supposed to prevent access to things that can't be codified in ordinary file rwx permissions [owner, group, other] or ACLs. It is proffered as "you get better protection", but the real reason is to justify its inclusion in the mainline kernel.
But, selinux in targeted mode is: dumb, annoying, useless. For example, it has a specific rule to deny
During my recent fedora upgrade [done via "fedup"], after reboot into the "install mode", selinux was there, but it was run in "permissive" mode. It was complaining at various points, but still allowed the action. To me, this is a scathing indictment if something like fedup feels the need to tell selinux to [effectively] STFU.
Honestly, I've never [personally] seen a single targeted mode selinux denial that wasn't a false positive [and couldn't be covered just as well with standard POSIX permissions or ACLs].
How about you?
Once again, we seem to be in complete agreement. I did the enhanced logging for amusement [That's why the logger never did a fail2ban equivalent]. Sometimes, I do "tail -f logfile" to watch the fun in realtime.
For a while, I've been considering paring down and packaging up my scripting environment for this and publishing it on github. The sshd patch and setup/modification of the config files [including changing the selinux attributes
The only wrinkle is that all users have to have set things up to use pubkey via ssh-keygen. For example, the public keys for my laptop and smartphone are entered into my
My desktop system uses two dictionary words for the password to my personal account and root account. I've grepped the log, and the kiddies never even came close. However, because I am using these words, that's why I added pubkey only for ssh access--just to be safe.
I had to firewall ssh because I just went from fedora 20 to 21 and would have been running an unpatched sshd. I just completed a reposync, so now I have the correct openssh sources and can rebuild/reactivate
Interestingly, although the kiddie attacks can come from anywhere in the world, they are predominantly from China. The whois info for non-Chinese IP's is somewhat spotty, but the ones in China have full/accurate information. Seems like the Chinese government wants to track everything back to a name.
I was considering adding automatic whois lookup, with firstname.lastname@example.org scraping, and then send the applicable part of the logs automatically [with a copy to the FBI
Your data correlates with mine and I've been logging for years [I have 450,000 log entries at present and I have a non-published IP address, not tied to any DNS, so my traffic will be lower--just so I can login to my desktop from Starbuck's using my laptop]. More on this logger and my security config below.
Apparently, the keyboard interactive problem has been known [by Redhat] since at least July 2013, see https://access.redhat.com/solu... and it sets ChallengeResponseAuthentication to "no" to specifically disable keyboard interactive.
I added a line to
I've also used
Thus, ssh can only use pubkey authentication, so even if a valid login/pw combo is presented, it will fail.
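For reference, on a stock OpenSSH [i.e. without my patches], the sshd_config lines that express "pubkey or nothing" look like this:

```
# disallow all password-style authentication
PasswordAuthentication no
ChallengeResponseAuthentication no
# only public keys get you in
PubkeyAuthentication yes
```

With this, a valid login/pw presented over ssh fails the same way an invalid one does.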
From what I've seen in the logs, it isn't just common/simple passwords that get tried. It becomes obvious that some systems have been hacked, the
This actually provides a signature of the attacker that can be tracked. It appears there is some black market for these databases as they're too specific to be just "let's come up with a list of most probable common passwords". They're hoping that person A (using password B) created a login on system C and the person reused the login/pw on other systems (e.g. D)
The [Chinese] script kiddies are getting dumber [or smarter]. My logger used to do random delay of up to 40 seconds. This slowed them down and because they can only attack so many systems in parallel, this helped the victim community at large. It also prevented them from trying thousands of passwords/second on my system [which they did by having hundreds of separate ssh sessions].
Eventually, the "replay" list gets exhausted and the attacker moves on [possibly showing up years later, sometimes from a different IP address]. But, lately, if the delay is over a certain amount, the request gets timed out by the attacker and they will repeat the same login/pw in an infinite loop. This prevents them from progressing through their list, but it also means they will never stop hammering my system [because the list never gets exhausted]. So, now, I've set the delay to a smaller value, that still delays, but doesn't trigger the infinite loop.
I'm not the AC, but I'll try to share the knowledge.
I'm a kernel programmer and worked on a Linux based realtime highdef broadcast quality H.264 video encoder that used a hybrid mix of multiple cores and FPGAs, so I'm fairly familiar with at least one use case.
openMP is used for parallelizing workloads via pragmas in the compiled code. That is, take an app that is designed for a single CPU, add some pragmas and some openMP calls and let the compiler parallelize it. It does this [mostly] by parallelizing loops that it finds.
Parallelizing [simple] loops can be done in [at least] two ways:
(1) A single loop can be parallelized across multiple cores
(2) If a function does loop A followed by loop B and loop A and B share no data, they can be done in parallel.
openMP assumes a shared memory architecture (e.g. all cores are on the same motherboard). Contrast this to MPI that can go "off board" [via a network link]. There are hybrid implementations that use both in a complementary fashion.
A good use case for this is weather prediction/simulation which is highly compute intensive but doesn't have realtime requirements. We just want our final answer ASAP, but what the program does moment-to-moment doesn't matter. Another use case is protein folding.
But, neither openMP nor MPI is well suited to a realtime situation that requires precise control over latency. Also, openMP doesn't support compare-and-swap. And, it's prone to race conditions.
Ideally, designing a given app from the ground up for parallelism is a better choice. If one does that, the fanciness of openMP isn't required. My last implementation of an openMP equivalent [that also incorporated what MPI does] was ~1000 lines of code because the app was pre-split into threads set up in a pipeline. It supported a multi-master, distributed, map/reduce equivalent using worker threads [still within 1000 lines].
Consider the second loop parallelization case. It's easy enough for a programmer to see that loop A and loop B are disjoint and put them in separate threads (e.g. A and B). But, if one is aware of this, the splitup can be done even if loop A and B share some data because one can control the synchronization between threads precisely. Extend this to 40-50 threads that have a more complex dependency graph.
Note that latency means that a given thread A will deliver its results to thread B in a finite/precise/predictable/repeatable amount of time. In video processing, each stage must finish processing within the time allotted for a video frame [usually 1/30th of a second]. With extra buffering, that can be relaxed a bit, but the average must be 1/30th and can't vary too widely (e.g. no frame could take [say] 1/2 second).
Thus, the AC, although snide, is partially right. If I were doing an implementation, I believe the result would be better not using openMP. But, I've got 40+ years doing realtime systems. Not everybody does. Most consumers of openMP [and/or MPI] are usually scientists/researchers who are [no doubt] experts in their field, but they're usually not expert level programmers. And, they usually don't have the restrictions imposed by a realtime system. Notable exceptions: programming for MRI/PET/etc machines.
afaik, supervisor mode wasnt added until 68030 or 40?
No, the mc68000 always had supervisor/user mode [I was the chief systems programmer for a startup company that designed/manufactured/sold 68000 microprocessor systems and I'm quite familiar with it]. It also had an external MMU chip, which was almost unusable in practical systems [you couldn't use just one--you needed many of them]. Most companies [mine and others (including Sun)] developed their own MMUs from FPGAs.
It had a 16 bit physical data bus, but logically [how a programmer saw it] it was 32 bits. It had 8 data registers and 8 address registers. The address registers were 32 bit, but only the lower 24 bits were used [just like the IBM 370].
You might be thinking of a virtual memory capable MMU, which was available as an external chip for the 68020 and integrated on die in the 68030. Note that while the 68010 is listed as having virtual memory support [via restartable instructions], it really couldn't be used easily for virtual memory.
The 68000 was one of the first 32 bit architecture chips, along with the IBM 370 [mainframe] and the VAX. At the time, the 68000 was vastly superior technically/architecturally to the 16 bit Intel 8086. Intel realized this and initiated a marketing blitz that won the day. This is chronicled in Regis McKenna's book "The Regis Touch".
Why not just stop buying ANYTHING then?
It would require a majority of some sort. Say 60% to start a boycott. And, like whitehouse.gov/change.org [and I forgot moveon.org], one endorses an action that they themselves will take. Others are free to follow or not. And, since a [detailed] explanation for the boycott must be provided [which can be fact checked], this helps limit the "fanboy factor".
Also, if this really took off, people would use their votes [more] responsibly, because it's a double edged sword. You may vote for a boycott of product X [and you may get it]. But, your favorite product Y may become boycotted [possibly without merit]. Once the latter happens, you will learn to use your votes responsibly.
And, I think you missed the point about a limited time boycott. It could be 3 months, 6 months, 1 year, etc. That's enough for the corp to feel some pain but it's not permanent. Also, it doesn't preclude individuals from buying anyway (e.g. Maybe there's a boycott on Mattel, but it's Christmastime and your daughter will be heartbroken if she doesn't get a Barbie doll. Even I wouldn't argue against that one.)
That's the problem - "the wisdom of the masses" is really quite dumb.
Well, yes and no.
Yes explains why Donald Trump gets any press at all. [Side note/disclaimer: I'm a Democrat and disagreed with most (but not all) of John McCain's political positions, but I've never questioned his patriotism, his valor, or heroism--being tortured for five years in the Hanoi Hilton and living to tell about it]. It disappoints me that Trump seems to be getting any traction for these egregious statements of his. In this instance, the "wisdom of the masses" really is quite dumb.
But, no. Google Play's ratings are usually in the ballpark. I have an Android phone and now I don't download anything with a rating less than 3. That's because I used to and I was uninstalling within 2-3 minutes.
Also, I used to subscribe to the "yes" notion [the masses must be wrong], so in their respective heydays, I skipped over "The Beatles" and "Abba". I decided to revisit along the way. Now, I'm a fan of both.
I think Abe Lincoln said it best: "You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time."
Isn't that something? It should be easy enough to check for, yet buffer overflows are still very common.
Microsoft came up with an API to handle buffer overflows whose calls take buffer descriptors [that have base/end/length] instead of mere pointers (e.g. memcpy --> memcpy_safe).
But, trying to retrofit that over a code base of tens of millions of lines of code isn't easy and has its own set of problems for QA'ing the result. For example, suppose you do a retrofit for certain code sections and do a full QA. You may still break every system in the world because your QA suite missed something. With Win10, hopefully, automatic rollback of recent changes will be part of the newer "continuous update" model. With that, the risk of adding some additional checking will be smaller, so MS will be encouraged to do more code review and cleanup.
Further, WinX, by architectural design and needless complexity, has many more avenues of attack than Unix/Linux/*BSD POSIX systems. Buffer overflow is but one, and it's the easiest to spot in a code review.
Case in point: Stuxnet
Before getting to the centrifuge controllers, stuxnet had to penetrate windows. It did so by putting attack code in a printer font. The WinX print spooler [inside the kernel] executed code in user space memory from ring 0. This is bad design for two reasons:
(1) putting a print spooler in the kernel at all [on all other above systems, the spooler is just a utility].
(2) Executing any code from user space memory by the kernel running at ring 0 [This is architecturally impossible by the other OSes]
This is [very old] legacy code from the MS/DOS days when there was no supervisor/user mode distinction [on an 8086]. In other words, they never bothered to change this in 20+ years. Contrast this to the fact that most Unixes back then used mc68000's which came out at the same time and did have supervisor/user modes baked into the hardware. None of the POSIX based systems have any way at all for the kernel to do what WinX was doing [the calldown to user space].