Comment Re:Kontact (Score 1) 44

And there are some great KDE4 apps. But Kontact is not one of them. I anxiously install and run it on every new desktop, thinking "this time, it's going to work." And it never does. Kontact on my opensuse box regularly gets hung trying to open a "choose a file" dialog box (say, if I'm attaching something to an email). I blame its ridiculous database and akonadi semantic crap foundation.

No such issues here. I use Kontact every day at work hooked up to Exchange (IMAP via DavMail). I have occasionally seen issues in the past with the open-file dialog when you have favorites added that are no longer accessible, like NFS or SMB shares. I haven't seen that in ages, but maybe that is just because I don't have anything added to my favorites. Running Kubuntu instead of openSUSE here.

Akonadi works pretty well now too, but on occasion the IMAP process seems to get stuck doing something or other in its database and pegs one CPU core at 100%. I just give it a kick with akonadictl restart and that usually clears it up.
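For reference, checking on and kicking Akonadi is just (akonadictl ships with the Akonadi server):

akonadictl status      # reports whether the Akonadi server is running
akonadictl restart     # stop and restart the server and all of its agents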

Comment Re:Themostat (Score 1) 139

"install it in 10 minutes and then play" to something somewhat more involved.

Well, I usually enjoy an involved setup :) As long as it is stable afterwards, of course.

However, those outages tend to be a few hours per year in the places I've lived, and the system comes back from failure without intervention.

Yeah, that brings up another issue with programmable thermostats: when you turn the temperature down while you aren't home and then the power fails, you are starting from a much lower temperature. A few hours might not be an issue when the house is already warm, but when you start 10 degrees lower and then your furnace fails, you have that much less time to respond.

The failure mode is that the nest freezes and you have no indication at all that anything is wrong.

I would have assumed that the Nest would alert you like the Honeywell when it quits checking in to their servers. Are they lacking that feature?

and that Nest refuses to let you manage updates yourself,

Ouch, yeah, I can certainly see that causing problems.

Comment Re:Themostat (Score 1) 139

If your thermostat flakes out in SVC it's no big deal. Very different context than rural North Dakota.

As someone who lives in the ND area, this is the exact reason why I did buy an internet-enabled thermostat. I got a Honeywell, not a Nest (it wasn't available yet), and an internet-enabled device is the perfect way to monitor if something does go wrong. I can log in to the portal and set up high and low limits, and if my furnace fails and the house drops below my set temperature, I get an email alerting me to that fact so that I can respond. Also, if the thermostat or internet connection fails, I get alerts warning me as much so that I can investigate the cause.

When it is -20F and your furnace fails to ignite, or your power is out, your mechanical/dumb thermostat isn't going to warn you, and you will still be dealing with burst pipes... If you want a mechanical backup, just install a second thermostat on the same control line. I would rather be alerted to the situation than be in the dark, because even if the device fails, bypassing it to get some heat is just a matter of bridging two wires.

Comment Re:MS has already done this... (Score 1) 414

It's more than just a shared kernel; the development APIs allow you to be hardware agnostic while developing, as well as infrastructure in place that allows app-to-app communication and data sharing.

How is this any different from what Linux and other OSes have been doing for years? I don't seem to recall all Linux programs having to be rewritten for each and every architecture (except for some applications that may try to tie in at a lower level).

From the article

Thanks to the sharing of C and C++ libraries, Direct X components and SQLite support, developers can actually write an app once and move it from one platform to another with only a few code tweaks.

Take out the Direct X part there, and doesn't that sound exactly like Linux?

Sounds like Java...

The article says that developers can move their apps around from platform to platform. This isn't like a JVM in the middle that interprets your application's compiled code and runs it on every architecture. It is just a single OS with a set of standards and libraries that are available on multiple architectures/platforms. All Microsoft has done is copy the exact model that other OSes like Linux have been following for many years; there is no innovation here. Of course they could extend this to be more like Java, so that programs could be moved from platform to platform without being recompiled, but I don't think that would fit in very well with the big-hitter programming languages.

As a whole these are of course good things, but seriously, how has it taken them this long to do this? Maybe they were worried about people recreating their libraries so that these apps could be more easily ported to other OSes.

Start menu is back in 8.1 so the cries have not been ignored.

A button that brings up the Metro (or whatever it is called this week) interface is not a start menu. I mean a real legacy start menu like Windows 7's (or older, if you prefer). I wouldn't recommend that they allow you to remove Metro altogether, since they want to get their cut from the app creators, but not giving users an option for what they want, no matter how hidden it is to enable, feels like a slap in the face. I could understand leaving it out if no one had told them users wouldn't like this before they released, but people have been complaining about this since the beta, so there is simply no excuse left for them. It is just Microsoft's way of telling us that they know what we want better than we do, and they are using their near monopoly on the PC market to force it on us.

Comment Re:MS has already done this... (Score 1) 414

But Ubuntu and Apple are the innovators... lame...

lol, this is Microsoft innovating? So the innovation your link is talking about is using the (largely?) same kernel on both architectures... Well, let's see which architectures Linux shares nearly all of its code across: http://en.wikipedia.org/wiki/List_of_Linux_supported_architectures. Hmm, a lot more than two.

From your link

The OSes even share a substantial chunk of browser code, finally bringing Windows Phone up to parity with its desktop IE progenitors.

Like how Chrome and many other browsers have been using the same browser engines across platforms for years (e.g. WebKit/Blink)? Hmm, so Android has been using much of the same browser code as the desktop for some time. Sharing code across architectures/platforms apparently is only new to Microsoft.

Shuttleworth's idea of convergence is hardware convergence. Why have a desktop around when your phone is fast enough to run a full desktop? Just dock your phone and there you have your desktop OS (which doesn't have to mean a touchscreen UI!), and that way you have all of your data with you all of the time, no cloud required (other than for backups?)!

Microsoft's idea of convergence is using the same UI on multiple platforms even when it doesn't make sense for that platform. I understand that they are trying to build an app ecosystem, so they want to push it everywhere, but is it really that hard to at least give the user an option to turn on a normal start menu? Just a checkbox somewhere to enable a legacy start menu would have completely changed how people viewed Windows 8, but for some reason they refuse.

Comment Re:So... no separation between system and userspac (Score 1) 335

Many VMs run one application, but they typically run many processes; for instance, OpenSSH. Say a hacker breaks into your application and gains some abilities in userspace. With your reduced separation between kernelspace and userspace, it is presumably easier to exploit the kernel. Once you have your way into the kernel you have the run of the place: root access to modify all of the other processes running on the machine.

If you are using a password to log in to your system, the attacker can get into the OpenSSH service and wait for you to log in (handing them your cleartext password). OpenSSH uses a tunnel to encrypt your password from the outside world, but it is still a tunneled cleartext password, so once the service itself is exploited you end up just handing it to the hackers. Or, on second thought, just break your way into PAM, and then all applications on the machine start feeding the hacker passwords. Once they have your password they can start to use it elsewhere on your network to see what else they can get into.

External-facing machines are often just the start of a hacker's goals, and making those external-facing machines any easier to compromise for the sake of efficiency is a bad idea.

Comment Re:Kinda late since Intel released Windows game bo (Score 1) 252

The reason I wanted a Valve box is to break free of the proprietary Xbox/Sony console paradigm.

What exactly is proprietary about Linux? From the summary link (click on the first circle on the linked page):

Users can alter or replace any part of the software or hardware they want.

capable of running all Valve's Source games in HD.

It is true that the system won't be able to run all of the Steam library natively; however, if you have another PC in the house that can, you can stream games to the Steam box. Again, from the link:

You can play all your Windows and Mac games on your SteamOS machine, too. Just turn on your existing computer and run Steam as you always have - then your SteamOS machine can stream those games over your home network straight to your TV!

No rebuying anything

You don't have to rebuy games for Linux. Whenever you buy a game on Steam that is Mac/Linux compatible, you can run it on all of those platforms at no extra cost.

Comment Re:They've got a good shot at it (Score 1) 252

From the linked page (click on the first circle)

You can play all your Windows and Mac games on your SteamOS machine, too. Just turn on your existing computer and run Steam as you always have - then your SteamOS machine can stream those games over your home network straight to your TV!

It may not count as native games for the Steam Box, but this could certainly help. Keep your gaming PC in your office/bedroom/wherever and have the Steam Box relay your library to your TV.

Comment Re:How much RAM? (Score 2) 197

According to the Cubic website:

(*) 1000Mbps link is limited to 470Mbps actual bandwidth due to internal chip busses limitation

Sounds like they have the Ethernet chipset hanging off of the USB bus on this unit as well, although I would expect the Ethernet to perform better than on a Pi since there is more CPU power to handle the overhead.

Comment Re:So, what the hell is Open Stack? (Score 1) 64

Compare that to the weeks and months that marketing, product management, development, QA testers are working on features and it is insignificant.

Where is all of this development and QA testing going on? The point of a cloud is to build an automated image that can be deployed to any of your environments. Presumably you are also architecting your applications so that they scale, which the cloud is excellent for.

For example, the dev team is done with their changes and wants to push from the dev environment to UAT. Say you have two instances in UAT. Instead of shutting down, updating the code, and starting up each machine one at a time, you just deploy two new instances. The deploy script fires off a test or two to make sure that the app started successfully. If you are doing something web related, it hits the API on the load balancer to add the new nodes in, maybe fires off a couple of tests against the VIP, removes the old nodes, runs a couple more tests against the VIP, and just like that your deployment is complete. If you are doing something messaging related, where your services communicate over a message bus of some kind, presumably your apps just connect themselves. QA can then test and determine if the code is ready to promote to production.
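A minimal sketch of that web-related flow in shell; deploy_instance, smoke_test, destroy_instance, and the $LB_API endpoint are hypothetical placeholders for whatever your stack actually provides:

for i in 1 2; do
    new=$(deploy_instance uat-app)                 # boot a fresh instance from the new image
    smoke_test "$new" || exit 1                    # did the app start successfully?
    curl -X POST "$LB_API/pool/add?node=$new"      # add the new node in behind the VIP
done
smoke_test "$VIP" || exit 1                        # a couple of tests against the VIP
for old in $OLD_NODES; do
    curl -X POST "$LB_API/pool/remove?node=$old"   # pull the old nodes out
    destroy_instance "$old"
done
smoke_test "$VIP"                                  # final check; deployment complete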

Of course this all sounds a bit overcomplicated for a UAT environment, but once you have every single piece of the puzzle automated, that is when things get interesting. Let's say your source control allows you to mark projects as "ready for UAT" and allows your QAs to mark them as ready for production. You can have your entire test system rebuild itself every day and do production releases once a week through the same process. Also, since you are in a position to scale, either through your load balancer or a message bus, you can use the same processes to scale.

Say your monitoring service watches CPU usage on your web app servers, and as soon as you hit 75% it fires up another instance. If load continues to come in after the new node is online, fire up another one. If the CPU drops below what is required to run on n-1 nodes, hit that load balancer API, take a node out, and destroy it.
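The same idea as a naive monitoring loop; avg_cpu, pool_size, spawn_node, and pick_node are again hypothetical helpers wrapping your monitoring and cloud APIs:

while sleep 60; do
    cpu=$(avg_cpu webapp-pool)                     # average CPU across the pool
    n=$(pool_size webapp-pool)
    if [ "$cpu" -ge 75 ]; then
        node=$(spawn_node webapp)                  # fire up another instance
        curl -X POST "$LB_API/pool/add?node=$node"
    elif [ "$n" -gt 2 ] && [ "$cpu" -lt $(( 75 * (n - 1) / n )) ]; then
        node=$(pick_node webapp-pool)              # the load now fits on n-1 nodes
        curl -X POST "$LB_API/pool/remove?node=$node"
        destroy_instance "$node"
    fi
done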

Even if you don't need to scale, doing deployments often means shutting down apps one node at a time. If you design everything for n+1 nodes, you are still running fine during your deployments, but you are "at risk".

From a sysadmin perspective this is all great, because a machine going down means nothing. Your monitoring services see that the VMs are offline and fire up another instance wherever there are spare CPU cycles. Without a "cloud" this is easy to do with VMware as long as you have shared storage, but do you trust the VM? It died while the system was running; what if something was corrupted? With the cloud you don't care about the old instance. Just let it die and create a new one instead.
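With OpenStack's CLI, replacing a suspect instance can be as simple as the following (the image, flavor, and server names are made up for illustration):

openstack server delete web-3                                          # don't trust the dead instance; discard it
openstack server create --image webapp-image --flavor m1.small web-4   # boot a clean replacement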

Everyone loves VMs for live migration, but why not take that one step further and save yourself some money while you are at it? Skip the extra cards/ports/networks/switches/cables for iSCSI/FC, skip the expensive SAN, and pack the hosts with some cheap SSDs. You don't need to migrate when you can just redeploy (although many of the hypervisors support moving images without shared storage now). This way you get local read/write speeds, which will probably beat any SAN you didn't have to sell your firstborn to buy.

Backups also become much easier. Back up the images (which won't change often) and the source control systems, and you are pretty much set (plus maybe a couple of one-off monitoring and deployment servers). Why bother with expensive dedup/compression appliances when you have just a few images to back up? DBs are an obvious exception there.

Speaking of DBs, let's say you want to rebase one of your lower environments off of a higher one, say, refresh the dev DBs with the test DB data. You just hit the API and take a snapshot of the test DB data volume, clone that snapshot to a new volume, shut down your DB node, detach the old volume, attach the clone, and start it back up. All of which can be an automated process. No pestering the DBAs to do it for you.
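With OpenStack again, that refresh could look something like this (the volume and server names are made up for illustration):

openstack volume snapshot create --volume test-db-data test-db-snap   # snapshot the test DB data volume
openstack volume create --snapshot test-db-snap dev-db-data-new       # clone the snapshot to a new volume
openstack server stop dev-db-1                                        # shut down the dev DB node
openstack server remove volume dev-db-1 dev-db-data-old               # detach the old volume
openstack server add volume dev-db-1 dev-db-data-new                  # attach the clone
openstack server start dev-db-1                                       # start it back up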

Comment Re:How can the Pi win this award when there are... (Score 2) 91

The Pi is an ARMv6 with an FPU attached and a decent GPU. Most of the new ARM SBCs out there now are ARMv7, which should be more efficient, and they often have NEON support to help with decoding media: http://en.wikipedia.org/wiki/ARM_architecture#Advanced_SIMD_.28NEON.29.

Also, Debian and Ubuntu only have official builds for ARMv7. The Debian team rebuilds ARMv6 as a separate distro (Raspbian) for the Pi.

Many of the other boards are well within $10 of the price of a Pi but include eMMC flash on the board, so no SD card is required to boot the OS, making the TCO about the same or cheaper if you would otherwise be buying fast SD cards to get quick boots on the Pi.

http://stackoverflow.com/questions/3310907/what-are-the-advantages-of-armv7-over-armv6-when-compiling-iphone-apps

Comment Re: Slowaris Delenda Est (Score 1) 154

That would be illegal unless you are using it for development only.

You must accept the OTN License Agreement for Oracle Solaris to download this software. Production use of Oracle Solaris requires a support contract.

You may not:
- use the Programs for your own internal business purposes (other than developing, testing, prototyping and demonstrating your applications) or for any commercial or production purposes;
- remove or modify any program markings or any notice of our proprietary rights;
- make the Programs available in any manner to any third party;
- use the Programs to provide third-party training;
- assign this agreement or give or transfer the Programs or an interest in them to another individual or entity;
- cause or permit reverse engineering (unless required by law for interoperability), disassembly or decompilation of the Programs;
- disclose results of any benchmark test results related to the Programs without our prior consent.

Comment Re:aptitude update (Score 1) 413

'yum shell' has saved me a few times when things were hosed, e.g. a package that was half installed while another package depended on the old version of it (the reinstall/downgrade options may be able to do the same). It also works great if you want to replace a package with a different one that provides the same libraries, where a plain yum remove of the first one would uninstall everything that depends on it.
yum shell
> remove package-a
> install package-b
> run

'yum localinstall /path/to/package.rpm' is very handy. The easiest way I can find to do the same with apt is
dpkg -i /path/to/package.deb      # fails because dependencies are missing
apt-get -f install                # pulls in the missing dependencies
dpkg -i /path/to/package.deb      # make sure the package is fully installed

'yum provides' seems to be fairly superior to apt-file
yum provides */library.so

apt-file works fine, but a) it isn't normally installed, and b) the file lists are generated separately from the normal repo metadata, so not every repo owner generates them, vs. the rpm repos where they are a standard part of the repo.
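For comparison, the apt-file route looks like this (assuming you install it first):

apt-get install apt-file      # not installed by default
apt-file update               # fetch the file lists, where the repo provides them
apt-file search library.so    # rough equivalent of: yum provides */library.so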
