
Comment Re:The pendulum swings too far... (Score 1) 441

I hope you are right and I have missed some factor, but I just don't see how a trillion-dollar industry will let itself be "beaten" by prices outside its control just because fracking put more oil on the market than expected. OPEC controls the vertical and the horizontal when it comes to oil prices: all it has to do is slow production at its whim, and prices will climb right back up, if not higher. Non-OPEC countries will end up following along, and even if they keep producing, they don't produce enough to significantly influence the market.

Comment Re:The pendulum swings too far... (Score 4, Interesting) 441

I would tell people to enjoy the oil drop while it lasts. This may be long gone by Memorial Day. Why? A few reasons:

1: China is a very thirsty nation. It is also extremely rich and about to embark on infrastructure improvements that make the US Interstate highway system look like building a McDonald's. So much of the demand for oil will come from China. Yes, US demand is at 1990s levels... but with China guzzling barrels, total demand is a lot higher.

2: Venezuela's leaders and others are in Russia today. People forget about the 1973-74 OPEC oil embargo against the US, which damaged the economy into the early 1980s. That can easily happen again. OPEC tends to get the prices it wants, and even though fracking may have increased supply, wells drilled this way deplete quickly, so the "golden" era is ending, especially with states like New York banning the practice wholesale. Supply will go back down, and OPEC will make sure it stays down.

3: China is building their own canal across the Americas. This way, they can get their oil from Venezuela a lot more easily, completely bypassing any influence from the US.

4: Congress changed. Already, the solar subsidies are on the chopping block, and in January 2017, it won't be a surprise when the next President yanks the solar panels off the White House. Big Oil is now firmly in control of the US again.

5: The Keystone XL pipeline and a repeal of the ban on selling US oil overseas are pretty much guaranteed to happen. This means that any US oil will be trading at world prices.

6: We are always one incident away from a price spike. Should someone so much as have a heart attack at a refinery, crude will be back in the triple digits.

7: Alternative energy has grown, but most people's cars still run on gasoline or diesel. Electric cars can effectively run on solar, wind, coal, nuclear, geothermal, hydro, or any mix of sources; internal-combustion vehicles require fossil fuel, and barring a major battery breakthrough, they will continue to.

So, tl;dr: it is nice to have gas prices as low as they are, but they will be back to 2008 levels, if not $5-$6 a gallon, by the summer. Oil prices are controlled by supply and demand; demand is high thanks to a thirsty China, and supply is easily pulled from the market.

Comment Re:Two things (Score 2) 403

The ideal is to have the router on its own box, or at least in its own VM on a type 1 hypervisor (Xen, ESXi, pick your poison), so that if the router VM gets compromised, the underlying hardware cannot be attacked (video cards can be reflashed; even keyboard firmware can be tampered with). Plus, if snapshots are used, the router can be rolled back if need be. Modern type 1 hypervisors can be locked down well enough that escape from a VM is extremely rare, especially if the management port cannot be reached from any of the VMs on the hypervisor.

Another possibility is to use vSwitches and make your file server a VM, with the pfSense instance connected both to the vSwitch holding the external Internet NIC and to an internal vSwitch for the file server and the internal LAN. One can get fancy from there and create three vSwitches for a working DMZ. The advantage of virtualizing everything is that hardware changes are easier, and "oh shit" mistakes can be partially mitigated by wise use of snapshots.

Comment Re:First look at what EFF has to say. (Score 1) 157

That is an OK guide, but I do disagree with the "are past messages secure if the keys are stolen" criterion. If an attacker captures messages and then snarfs the keys, at best there is some obfuscation in place protecting those messages.

Of course, there are mechanisms for ephemeral keys: for example, one's long-term public key can be a signing key used to authenticate a Diffie-Hellman exchange that generates a temporary key pair; when the parties are done with the conversation, both endpoints dump the temporary keys, leaving the captured messages unreadable.
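A minimal sketch of that flow, using toy Diffie-Hellman parameters (the Mersenne prime modulus is chosen purely for illustration; real software should use a vetted group or X25519 from a proper crypto library):

```python
import hashlib
import secrets

# Toy DH parameters for illustration only -- NOT secure for real use.
P = 2**127 - 1  # a Mersenne prime, standing in for a vetted DH group
G = 3

def ephemeral_keypair():
    """Generate a throwaway key pair for a single conversation."""
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

# Alice and Bob each generate ephemeral keys (their long-term signing keys
# would sign these public values so each side knows who it is talking to).
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Both sides derive the same session key from the exchange.
a_key = hashlib.sha256(str(pow(b_pub, a_priv, P)).encode()).digest()
b_key = hashlib.sha256(str(pow(a_pub, b_priv, P)).encode()).digest()
assert a_key == b_key

# When the conversation ends, both endpoints discard the ephemeral keys;
# stealing the long-term signing keys later recovers nothing.
del a_priv, a_pub, b_priv, b_pub, a_key, b_key
```

This is the basic shape of what OTR and similar protocols do per-session: the signing key only authenticates, and the key that actually protects the messages never touches disk.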

I personally like keeping the encryption process separate from the messaging protocol. Ages ago, PGP Desktop used to sit atop AIM, MSN, and other chat platforms, offering transparent encryption completely independent of the messaging program. The advantage is that one can "pack their own parachute" when it comes to trusting keys, and it would take two companies colluding on a ninja update to steal both the encryption keys and the messages.

Comment Re:"and they may be bought for their assets." (Score 1) 314

RS/Tandy had some absolute gems, though. One thing their machines had that no PC since has offered: a usable copy of DOS in ROM.

This is a very simple thing. If a PC shipped with a ROM image of Linux, a BSD, or even a Windows PE image with recovery tools, it would make life a lot easier for support staff in general. Add hooks for iLO support, and it would be a big asset for IT, even if it is just booting into the recovery OS to wipe the drives before repurposing the box.

For the individual user, a recovery OS would be extremely useful. One could run AV tools to scan for rootkits. Complete bare-metal backups would be doable. One could run a disk scrub to look for errors without worrying about files in use. And if an HDD is going bad and can no longer be booted from, one could dd a disk image before the drive dies completely.
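A rough Python analogue of `dd conv=noerror,sync` shows the idea behind that last step: copy block by block, and pad unreadable blocks with zeros instead of aborting. The paths here are placeholders; on a real rescue the source would be a device node such as /dev/sdb, and a dedicated tool like GNU ddrescue is preferable when available.

```python
BLOCK = 4 * 1024 * 1024  # 4 MiB per read, like dd's bs=4M

def image_disk(source, dest, block=BLOCK):
    """Copy source to dest, zero-filling unreadable blocks; returns bad-block count."""
    bad = 0
    with open(source, "rb") as src, open(dest, "wb") as out:
        pos = 0
        while True:
            try:
                src.seek(pos)
                chunk = src.read(block)
            except OSError:
                # Unreadable region: pad with zeros so later offsets in the
                # image still line up with the original disk.
                out.write(b"\0" * block)
                pos += block
                bad += 1
                continue
            if not chunk:
                break
            out.write(chunk)
            pos += len(chunk)
    return bad
```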

I am actually surprised that no modern PC offers this. SSD isn't that expensive, and a recovery image can easily fit on 4-8 GB of space. If a PC can store firmware, it can store an OS recovery image and have it available.

Of course, the ideal would be a recovery image plus a second image for reinstalling the OS (or perhaps both in one image, similar to how Solaris 11 ships). That way, no matter how severe the HDD failure, the machine is always usable.

Comment Re:Fuck Me (Score 5, Informative) 553

I try to stay out of the systemd fray... but it goes against the core of UNIX... which is the KISS principle.

Init should start tasks, possibly stick them into jails or containers, and set resource limitations. Having something do everything including the kitchen sink is just asking to get hacked down the road unless millions of dollars are spent on source code audits.

As an IT person, results are important. What does systemd provide that previous mechanisms didn't? Parallel startup? I don't boot servers often enough for asynchronous process startup to be a big issue. Resource limits? Doable from the shell script that gets plopped into /etc/rc.d. I'm just not seeing the benefit. What I am seeing is a gigantic amount of code touching the entire system, which gives me concerns about security and stability; there have been a number of articles on /. about systemd, to the point where people are forking distros just so they don't have to deal with it.
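The resource-limits point is easy to demonstrate: an init script can cap a daemon with a few lines and no init-system support, the same way `ulimit` lines in an rc.d script do. A sketch (POSIX-only; the command and the limit values are placeholders):

```python
import resource
import subprocess

def launch_capped(cmd, max_open_files=256, max_mem=512 * 1024 * 1024, **popen_kwargs):
    """Start cmd with file-descriptor and address-space caps applied."""
    def apply_limits():
        # Runs in the child between fork() and exec(), like ulimit in rc.d.
        resource.setrlimit(resource.RLIMIT_NOFILE, (max_open_files, max_open_files))
        resource.setrlimit(resource.RLIMIT_AS, (max_mem, max_mem))
    return subprocess.Popen(cmd, preexec_fn=apply_limits, **popen_kwargs)

# The child process sees the lowered limit:
proc = launch_capped(["sh", "-c", "ulimit -n"], stdout=subprocess.PIPE, text=True)
print(proc.communicate()[0].strip())  # → 256
```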

Comment Re:My guess (Score 1) 130

It would be nice to see a return to wired networking, just because it is a lot harder to hack (it requires physical access) and it is faster. There is no way a Wi-Fi adapter can handle what even an eight-port GigE switch can deal with.

Ironically, I'm seeing combined devices with newer SAN offerings. Whether you have an FC HBA, a CNA card, or even just a plain NIC, the SAN will happily do Fibre Channel, FCoE, iSCSI, NFS, CIFS, or WebDAV, all at the same time. Cutting the cord might be nice for tablets and smartphones, but real speed requires a cord, even if it is a copper wire.

Comment Re:My guess (Score 1) 130

The tablet market is pretty much saturated.

The desktop (as in role... this physical machine can be a laptop, a desktop, a server, or a tablet with a dock like the Surface Pro) machine isn't going anywhere, and has plenty of room to grow.

As for a market, it is actually surprising nobody has made a LAN version of OnLive where the video commands are sent to a rendering server, and streamed video is sent back. This way, each device on the LAN can have a decent framerate for video without needing large amounts of GPU present.

Of course, backups, centralized storage, virtualization, IDS/IPS utility, and many other items have not even been scratched in the home LAN arena, so there is still plenty of room for a company to grow with basic items like that.

Comment Re:Dewhat? (Score 1) 150

This raises a question:

Why do we have these non-standard wireless keyboard protocols with unknown (if not nonexistent) levels of security, when Bluetooth is a widely accepted standard and has proven itself quite robust against attack? (It isn't perfect, but BT 4.2 is pretty darn secure.)

Why don't MS and the other keyboard makers bundle a BT dongle ($10 on Amazon) and go with a tried-and-true standard? If the keyboard supports USB for charging, pairing is definitely not an issue. If not, it can come pre-paired (similar to how Apple pairs the mice and keyboards shipped with iMacs), or one can use one of the many pairing methods.

Going with BT doesn't just mean there is actual vetted security in place; there are also facilities for running at low power and for not having to maintain a constant radio connection.

Comment Re:Application installers suck. (Score 1) 324

With SSDs becoming more commonplace coupled with filesystem-level deduplication, I wonder if this might be a good thing. Throw not just applications, but multiple instances (browser tabs, for example) into completely separated VMs.

MS has a ways to go to catch up with VMware, especially on features like transparent page sharing and the other memory-management techniques ESXi uses to handle RAM overcommit. But if they catch up in those departments, it wouldn't be far-fetched for every application instance to have its own OS and filesystem space, well secured.

Add a software firewall as a VM (think something like pfSense), and if one of the VMs gets compromised, the amount of damage it can do would be limited.

Comment Re:Application installers suck. (Score 1) 324

Long term, with filesystem-level deduplication becoming more common, I wonder if the best thing would be to move back to statically linked executables. With identical code deduplicated by the filesystem, there wouldn't be much need for dynamically linked executables, and even though static binaries take up a bit more space, they would save on aggravation, version conflicts, and other headaches.
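A back-of-the-envelope sketch of that argument, using two fake "binaries" that statically link the same 16 KiB of library code (all sizes and the 4 KiB dedup granularity are invented for illustration; real savings depend on block alignment inside the binaries):

```python
import hashlib

BLOCK = 4096  # dedup granularity of the hypothetical filesystem

def unique_blocks(*files):
    """Count distinct 4 KiB blocks across all files, as a dedup FS would store them."""
    seen = set()
    for data in files:
        for i in range(0, len(data), BLOCK):
            seen.add(hashlib.sha256(data[i:i + BLOCK]).hexdigest())
    return len(seen)

# 16 KiB of "library" code (four distinct blocks), statically linked into
# two binaries that each add one block of their own code.
lib_code = b"".join(i.to_bytes(2, "big") * 2048 for i in range(4))
app_a = lib_code + b"A" * BLOCK
app_b = lib_code + b"B" * BLOCK

raw = len(app_a) + len(app_b)                  # naive on-disk cost
deduped = unique_blocks(app_a, app_b) * BLOCK  # cost after dedup
print(raw, deduped)  # → 40960 24576: the shared library blocks are stored once
```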

Even non-DLL dependencies can be an issue, for example applications requiring specific JVM versions. It would be nice to have the JVM built into the program itself, as opposed to playing "guess that smell" and hoping the JVM in use isn't too insecure.

Comment Re:Application installers suck. (Score 1) 324

The ironic thing is that this can already be done under Windows. VMware's ThinApp and Evalaze are utilities that can take a Windows package and turn the whole thing into a single file. ThinApp could even find the latest update of a packaged application on a share, so running Word would execute the newest copy.

It takes up disk space, but it would be nice for Windows to offer a complete virtual machine (with a virtual filesystem and Registry) so one could click on an application and have its data stored in part of the user's home directory, completely isolated from other programs. Of course, something would have to be added so an e-mail program could fetch an attachment from the spreadsheet directory, but that is hardly an impossible task.

Comment Re:Maybe (Score 4, Interesting) 93

Storage comes in tiers, and each tier is different: from the contents of CPU registers to what is stashed on Amazon Glacier, with everything in between (RAM, SSD, HDD, etc.). A revolution at one stratum will have a completely different impact than a revolution at another.

Take RRAM, MRAM, or some other random-access memory technology that is up to speed with DRAM, except cheaper and without the need for refresh. That would not just supplant RAM but also make inroads on SSD, depending on how inexpensive it is. Would it fundamentally change computing? Somewhat, although I doubt RRAM would ever drop near the price of SSD, let alone HDD.

Or take WAN bandwidth. If the average home, and even the average phone, had terabit-class bandwidth, that would change things fundamentally. Cloud storage could go from stashing occasional files to serving as a tier 2 NAS, especially with proper client-side security and encryption. However, this is extremely unlikely as well.

Perhaps a tape drive company manages reliable media with the bit density of hard disk platters, fitting 100 TB on a $10 cartridge with drives costing $500. Far-fetched, but if it happened, it would have a different impact on computing than memory costing 1/100 of what it does... yet it would still be significant.

Improvements in the middle tiers may or may not help. Bigger hard drives must push through I/O pipes that stay relatively small, stretching array rebuild times and forcing businesses past RAID 6 to keep data protected while degraded. Some arrays already take 24 hours to rebuild after one lost HDD, and if capacity keeps increasing without the I/O to match, we may need RAID levels with not just two levels of parity, but three or four, perhaps with another level just for bit-rot checking.
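The rebuild-time worry is simple arithmetic: time to resilver one drive is roughly capacity divided by sustained rebuild throughput, and capacity has grown much faster than throughput. The 100 MB/s sustained rate below is an illustrative assumption:

```python
def rebuild_hours(capacity_tb, throughput_mb_s):
    """Best-case hours to rewrite one full drive at a sustained rate."""
    return capacity_tb * 1e12 / (throughput_mb_s * 1e6) / 3600

# Capacity grows; sustained throughput barely moves, so rebuild windows widen.
for cap in (2, 6, 10):
    print(f"{cap} TB at 100 MB/s: {rebuild_hours(cap, 100):.1f} h")
# → roughly 5.6 h, 16.7 h, and 27.8 h respectively
```

Real rebuilds are slower still, since the array also serves production I/O while degraded, which is exactly why the window for a second (or third) failure keeps growing.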

So, when someone says that there are storage breakthroughs... it really depends on the tier that the breakthrough happens at.

Comment Re:Doesn't really matter if they do patch it (Score 1) 629

I remember mention way back in the Android 2.2 days of making Android more modular, so that even a relatively old phone could still run the latest code.

The lesson here is to get a device with, at minimum, an unlockable bootloader. That way, even if official patches stop coming, one can still flash a ROM like CyanogenMod or another third-party build that stays updated.

Of course, something like the Xposed framework is quite useful as well, especially modules like XPrivacy, which help immensely with on-device security.
