
Comment Re:Can someone explain this guy's logic to me (Score 1, Troll) 367

From the article: The monthly fee, which would pay for distribution and transmission of energy

So no, it is exactly as I described it. And what you suggest is exactly what they want to do: charge a monthly fee to be connected to the grid. There is a small difference between your plan and theirs, though. Under their plan, if you buy enough power from them they waive the fee, because they figure they've collected enough to cover the grid costs out of the combined power+transmission rate per kWh they normally charge.

Comment Re:Can someone explain this guy's logic to me (Score 4, Informative) 367

Actually no, it's about simple accounting for resources. It costs money to maintain the electric grid. There are two basic costs involved in you receiving power: 1) the cost of generating the power, and 2) the cost of transmitting that power. Ordinarily when you buy power from the power company they roll these together and charge you per kWh.

When you have your own on-site generation you have 3 basic states of use: 1) using some amount of power from the grid, 2) using zero power from the grid, 3) putting power back into the grid. For states 1 and 2 you are simply charged for electricity as usual. It's state 3 that's problematic.

The problem is that many people naively expect to be paid the same rate for energy they put back into the grid as they paid for energy they took from the grid. But the rate they paid to take energy from the grid covered generation plus transmission. If they are paid that same rate for energy put back into the grid, i.e. "running the meter backwards," then they are effectively being paid for transmission service they never provided.

The ideal fix for this is to have two meters: one for inbound power usage and one for outbound power supply. The customer would then pay for inbound usage at the normal rate and would be paid for supplying power at a reduced rate. That is, they would be paid for generating the power but not for transmitting it, because it is the utility, not the customer, that provides the transmission.
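To put toy numbers on it, here's a minimal sketch of two-meter billing in C. The 8-cent generation / 4-cent transmission split is purely illustrative; real tariffs vary by utility.

    #include <stdio.h>

    /* Illustrative rates only -- real tariffs vary by utility. */
    #define GENERATION_RATE   0.08  /* $/kWh for generating the power */
    #define TRANSMISSION_RATE 0.04  /* $/kWh for maintaining the grid */

    int main(void) {
        double kwh_in  = 500.0;     /* energy drawn from the grid  */
        double kwh_out = 300.0;     /* energy pushed onto the grid */

        /* Inbound usage pays for generation AND transmission. */
        double charge = kwh_in * (GENERATION_RATE + TRANSMISSION_RATE);

        /* Outbound supply earns the generation component only: the
           customer generated the power, but the utility's wires
           still carried it. */
        double credit = kwh_out * GENERATION_RATE;

        printf("charge $%.2f, credit $%.2f, net bill $%.2f\n",
               charge, credit, charge - credit);

        /* "Running the meter backwards" would instead credit kwh_out
           at the full combined rate, overpaying the customer by the
           transmission component (300 * 0.04 = $12 here). */
        return 0;
    }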

In lieu of this, the power company has found it easier to simply charge a connection fee to pay for the transmission. It looks bad to someone unfamiliar with the mechanics of power transmission, and it doesn't seem particularly fair, because it's apparently a flat fee set from the company's estimate of how much the customer uses the grid to transmit power, rather than a metered amount.

That said, it is still more fair than what they are doing now, which is paying the customers who put power back into the grid not only for the generation, which they did provide, but also for the transmission, which they did not. The money has to come from somewhere, and that somewhere is the company's bottom line. So the company will eventually petition to have electricity rates raised to cover this cost, which means that everybody else will have to pay more because some people think it's cool that their meter actually runs backwards.

It's really not that difficult to understand. The problem here is that the reporter didn't check her facts or use logic or reason. Instead it's a he-said-she-said story between the underdogs and the big bad evil corporation. She mentions in the article that she pressed the power company spokesman and got him to admit that "currently, no Xcel electric customers pay extra to fund solar connectivity fees. In reality, Xcel absorbs those fees." Then she goes on to say that "The money from the proposed fee would not go into the pockets of electric customers, but would go back to Xcel."

This is true but nowhere near the whole story. Xcel has a fiduciary responsibility to account for resources used. Right now Xcel's resources are being used without payment, and worse than that, Xcel is actually paying someone else to use those resources. That is an untenable situation which can only be resolved by charging someone for it. This can be done either by correctly charging the customers who use these resources or, failing that, by raising the rates for everyone. There are no other options. But Christin did not bother to point out the obvious here.

Comment Re:Read about this yesterday (Score 1) 254

Seems like the best solution for those who don't receive many SMS messages is to restrict SMS messaging to your phone. To do this, call AT&T at 1-800-331-0500 (or 611 from your phone), and ask to restrict text messages. According to the rep I spoke with just now, only AT&T's (free) text messages regarding changes in service, firmware upgrade info or plan info (e.g., how many minutes left or your bill) will go through.

Thanks. I just got my iPhone the other day and didn't sign up for any SMS plans since I don't use SMS. I just called 611 from the phone and it was no trouble at all to get the plan changed from a la carte text messaging (at an outrageous 20 cents a text, even for incoming) to text messaging restricted. Now I don't have to worry about being charged when someone sends me a text message, and hopefully I don't have to worry about this bug either, since it will presumably be blocked at the server.

Thanks again.

Comment Re:Just like Linux (Score 3, Interesting) 391

Funnily enough, a few months back I made a very similar error, if not the exact same one, while coding on the bootloader for Darwin/x86. Except in my case it wasn't exactly a true error, because in the bootloader I know that a page-zero dereference isn't going to fault the machine but will instead just read something out of the IVT (the real-mode interrupt vector table).

So as I recall, it seemed perfectly reasonable to initialize a local variable with the contents of something in the zero page and then check for NULL and end the function. But GCC had other ideas. It assumed that because I had dereferenced the pointer a few lines above, the pointer must not be NULL, so it stripped my NULL check out completely. Had it warned about this, something like "warning: pointer is dereferenced before checking for NULL, removing NULL check", that would have been great. But there was no warning, so I wound up sitting in GDB (via the VMware debug stub) stepping through the code and staring at the disassembly until I realized that... oops... the compiler assumed this code could never be reached, because in user-land it would have segfaulted 4 lines earlier if the pointer were indeed NULL.

Obviously the fix is simple: declare the variable but don't initialize it at that point, do the NULL check and return if NULL, then initialize the variable. If using C99 or C++ you can actually defer the local variable declaration until after you've done the NULL check, which IMO is preferable. It may be that the guy wrote it as C99 (where you can do this), then went "oops, the compiler won't accept that in older C" and simply moved the declaration-and-initialization statement up to the top of the function instead of splitting the declaration from the initialization. My recollection of how I managed to introduce this bug myself is hazy, but as I recall it was something like that.
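To make it concrete, here's a minimal sketch of the pattern. The struct and names are invented for illustration; the real code was bootloader-specific.

    #include <stddef.h>

    struct boot_args { int flags; };

    /* BUGGY: the initializer dereferences p before the NULL check.
       GCC may assume a dereferenced pointer is non-NULL (see
       -fdelete-null-pointer-checks) and silently remove the test. */
    int flags_buggy(struct boot_args *p) {
        int flags = p->flags;   /* dereference happens here...        */
        if (p == NULL)          /* ...so GCC can strip this check out */
            return -1;
        return flags;
    }

    /* FIXED: check first, dereference after. In C99/C++ you can
       simply declare the variable after the check, as done here. */
    int flags_fixed(struct boot_args *p) {
        if (p == NULL)
            return -1;
        int flags = p->flags;   /* C99-style late declaration */
        return flags;
    }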

Comment Re:Yeah... (Score 1) 1057

I don't know, the scientific community was pretty adamant in its consensus against the Bush administration on this.

It's more the case that the bulk of the media was adamant in its consensus against the Bush administration and made sure to put on the air only the scientists who would say things against the Bush administration. To appear fair they'd occasionally put Republican pundits on to counter the scientists. But they wouldn't usually put on a scientist with the view that climate change might not be fully explained as man-made. And if they did, they'd simply accuse him of being a Republican shill and make all sorts of specious arguments to discredit him in the eyes of their viewers.

This guy sounds like a holdover from individuals hired by the previous administration to refute the rest of the scientific community.

If you FTFL (followed the fucking link) you'd see this: Senior Economist, U.S. Environmental Protection Agency, Washington, DC, 1971 to present

So here's a guy with a masters in physics and a PhD in economics who's been working for the EPA since the Nixon administration. But, oh, wait. He isn't an "environmental scientist" so his opinion doesn't matter. That's the best trick yet that's been used to discredit everyone who doesn't toe the party line that climate change is fully or mostly caused by man and that we must take drastic action to attempt to reverse it.

The rub I have with that trick is that environmental science seems to have as one of its axioms the presupposition that climate change is man-made. That is to say, environmental scientists presume that statement to be true without having to back it up. Then they focus on researching ways man can change his behavior to have less of an impact on the environment.

Saying that environmental scientists have formed a consensus that man is causing global warming is like saying that cattle ranchers have formed a consensus that beef is the best meat.

The other thing I hate about this whole debate is that ultimately it is not one of science at all. The question is not "does man have an effect on the earth" because the answer to that question is undoubtedly a resounding YES. The act of you simply breathing has an effect on the earth. So we can get more specific and ask questions like "is man directly responsible for rising global temperatures?" and "are we going to cause the planet to become uninhabitable?" and "if so, how long do we have?".

The answers to those questions are a lot murkier, and there has been a fair amount of bogus research out there. One great example is the whole "hockey stick graph." Intended to show how much more temperatures rose in the 20th century compared with prior centuries, it instead showed the result the "scientist" expected. It showed that result because he explicitly coded the program and its input data around the assumptions he had been making. The resulting visualization was, of course, exactly what he expected to see. Garbage in, garbage out.

So what we have is a feedback loop where the environmental scientists are all doing research from their own assumptions and from past assumptions. There is very little truly "hard data" available in this field. That is simply due to the nature of it. We did not record temperatures until relatively recently. We did not look at what the polar ice caps were doing until relatively recently. For all we know, temperatures might have risen and fallen in cycles for ages. For all we know, the polar ice caps have been growing and shrinking for ages.

In lieu of hard data, environmental science tries to come up with methods of interpolating it from other observations that were recorded, or from archeology we can do now. But we can't measure what the temperature of something was back then, so we have to count the size of tree rings and then try to write a formula relating tree rings to what the temperature probably was. But even then there's a shit ton of other variables going into how much a tree grew during a given season. And worst of all, the formulas used to make these calculations are written by the people who want to see the result that temperatures were lower and steadier in past centuries.

So all of the interpolated data they use is based on their own assumptions of how it should be interpolated. The assumptions of people with a vested interest in claiming that man-made global warming is occurring.

Thus the question of how much of an effect we are having on the environment is very, very difficult to answer, and the only "consensus" is from a field of scientists whose business it is to form a consensus and work from there. Therefore, there is effectively no real consensus at all, just the assumptions made by these people. And as to the question of how long we have, you're dealing with something where you have to predict the future and try to come up with ways to model it, hoping that your assumptions are not wrong. And again you have a situation where there is consensus by design, because the environmental scientists aren't questioning each other's assumptions.

And as we can clearly see here, when any scientist dares question the assumptions he is attacked for it, particularly when it is in the realm of politics. And the reason for this is that the politicians don't care what the true answers are. They don't want real research with all of its murkiness and footnotes. They want a group of people who will unequivocally say that man is causing severe climate change and that we must stop it at all costs. And they want it precisely because of the "at all costs" part because that gives the politicians more power. Fear is a great tool of the politician.

Comment Re:You know it's bad when (Score 1) 137

I dunno about you, but installing XP on new standalone hardware (using our legacy VLK licences) is a royal pain in the bum these days. Needing a floppy disk to install the SATA drivers, or patching the OS ISO, futzing around trying to find compatible sound card drivers, wireless network card drivers, the multitude of patches (thank $deity for the SP3 rollup, it was getting ridiculous post-patching SP2 even with WSUS).

It's only hard if you're actually using CDs and F6 floppies. You can ditch the CD by using Windows Deployment Services (WDS) in Legacy (i.e. RIS) mode. There is even a thing called binlsrv.py for Linux systems that will run on port 4011 and speak the BINL protocol (among others) so you can run this on a Linux server along with tftpd and Samba.

If you put it in OEM mode it'll even copy entire folders onto the C: drive during text-mode setup. So what I do is I have it copy a Drivers folder to C:\ with the common chipset, disk, and ethernet drivers. I have a few video drivers (like Intel GMA) in there as well.
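For reference, the relevant winnt.sif entries look something like this. This is a sketch from memory; the driver subfolders are whatever you set up under your own $OEM$ tree.

    ; i386\winnt.sif -- unattended answer file (sketch, not verbatim)
    [Unattended]
    OemPreinstall = Yes
    ; paths are relative to %SystemDrive% after the $OEM$ folder copy
    OemPnPDriversPath = "Drivers\chipset;Drivers\disk;Drivers\net;Drivers\video"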

The only catch is that when you put it in OEM mode (this is a setting in i386\Templates\winnt.sif) you lose the ability to do recovery-mode setup and you lose the ability to use F6 floppies. The workaround is to edit txtsetup.sif and include your "RAID" driver. Typically these days you're talking about Intel, AMD/ATI, and nVidia AHCI controllers. Most of our machines use Intel, but the other day I ran into a Toshiba laptop with AMD. Once I figured out how to download the AHCI driver (AMD's site is crap!) I was able to integrate it just like the Intel one.

The gotcha here is that I modify TXTSETUP.SIF only the very small amount needed for text-mode setup to use the driver and put it in the start-services list for GUI-mode setup. There are ways to make it also fully install the driver (as if it were part of Windows), but I eschew this in favor of simply making sure the driver is in a subfolder of the C:\Drivers dir, which is listed in OemPnPDriversPath.
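The additions are along these lines, shown for a hypothetical Intel AHCI device. This is a from-memory sketch; the exact PnP IDs and file entries come from the driver's .inf, so treat every value here as illustrative.

    ; i386\TXTSETUP.SIF -- minimal additions for one AHCI driver (sketch)
    [SourceDisksFiles]
    iaStor.sys = 1,,,,,,4_,4,1,,,1,4

    [HardwareIdsDatabase]
    PCI\VEN_8086&DEV_2829&CC_0106 = "iaStor"

    [SCSI.Load]
    iaStor = iaStor.sys,4

    [SCSI]
    iaStor = "Intel AHCI Controller"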

So what happens in this case is that net booting into text-mode setup uses the BINL server to figure out which ethernet driver .sys file must be loaded, then TFTP loads TXTSETUP.SIF, which tells it what drivers to load (including all the disk drivers built into Windows). Once it has TFTP-loaded everything it starts the NT kernel, at which point it is able to use SMB because it has started the correct network driver. Assuming your Windows setup files have the disk driver (i.e. it came from Microsoft, or you edited TXTSETUP.SIF to add it), then it will also have the disk driver and will show you the partitions on your disk.

Basically it's like any other XP setup: it loads a bunch of stuff onto the HD during the text-mode phase, then reboots. At this point the system is sort of installed. That is, it boots from C:\WINDOWS using the normal NTLDR boot process. However, it is only using whatever drivers text-mode setup told it to start, which is really just the base drivers and the disk driver (in particular, not the ethernet driver yet). The first thing GUI-mode setup does is hardware detection. This is where the magic of OemPnPDriversPath comes in, because in addition to using all of the built-in drivers it also consults all the drivers in the path you specify.

Assuming you made sure at least your ethernet and disk drivers were in there, when you finish setup you'll have enough drivers to run Windows Update, which will often get you a lot of the extra drivers. And if not, you're on the internet, so it's not that hard to go find them.

As of now, the chipset, ethernet, and AHCI drivers are very easy to come by for XP. So XP isn't "dead" by any means. In fact, MOST of the decent OEMs (e.g. HP/Compaq, Lenovo) are still supporting XP, particularly for their business-oriented lines.

Where you run into trouble are things like Toshiba laptops. Toshiba has always had IMO a rather odd way to do their platform drivers and their new models have the drivers buried in some sort of huge "value-add" package that only installs on Vista. I am not sure if it's possible to dig out the drivers but basically you lose all the special keys and the volume control without them. If you can live without these then you can keep running XP and avoid the mess that is Vista.

The worst part about it is that Vista isn't even all that bad. I run it on my work laptop (Apple MBP) just fine. But in that case I am using a full copy of it, not some copy with OEM crapware.

Comment Re:Ridiculous (Score 1) 344

It's the ORM layer that's the real pain in the arse (assuming you're using OOD, and assuming you actually want a direct mapping between your object model and relational model). Things like Hibernate and judicious use of code generation make it a lot easier, but you still need to know what's going on and you still need to (and can!) choose between navigating among objects (letting the ORM do the queries) and writing a hand-written query. To some extent an ORM (and the RDBMS vs OODBMS choice) is just a reflection of the different requirements of on-disk vs in-memory representations of objects. On-disk storage is all about efficient and flexible querying, retrieval, (distributed) concurrency, and storage and management of huge data-sets, whereas in-memory storage is all about assigning behaviour and navigating relationships between smaller sets of objects whilst carrying out that behaviour.

Well, there's your problem: Hibernate. Hibernate basically just pulls tuples into objects but doesn't do a very good job of managing the object graph. My experience has been that users of Hibernate have to actually write the code that pulls in related objects. So your Customer object can have a getInvoices() method, but at some point you have to write how the invoices are retrieved given a customer. In the end you also wind up in this situation where you call save on a "root" object and it saves that object and any objects related to it, recursively. But... that's not relational. That's hierarchical. FAIL.

A more fully-featured design has you describe the tables and their relationships in some sort of a data file. It could be one giant XML file or a number of XML files (like one for each Entity/Table) or even something simpler like a plist file (just a serialized hierarchical key/value dictionary).

Probably the best example of this is the Enterprise Objects Framework (EOF) component of WebObjects. Everything you do is done through an "editing context" that is somewhat akin to a database transaction. Typically you pull in your "root" objects by asking the editing context to do a query for you. From then on out you just ask your customer object for invoices (to-many) or for account manager (to-one) or whatever. The editing context records all of the changes you make relative to what was fetched from the database and when it comes time to save them you ask the EC to save and it figures it all out no matter how complex your object graph is.

More advanced usage allows for the ability to fake attributes and even relationships giving you attributes that are actually sums or averages or whatever of some related data.

Apple themselves took this concept, pared it down, and brought it back from Java to Objective-C as "Core Data". By pared down I mean it's hardwired to use a SQLite store vs. any old SQL database. Well there is an XML store as well but we won't get into that here.

Other similar options include Apache Cayenne (Java) and Telerik ORM (.NET). Those I have not explored as fully as WebObjects, but at the basic level they seem to be structured in much the same way. The one thing I haven't yet figured out with those is how you would go about doing some of the derived-attribute stuff I did in a fairly large WO project. Basically what I had in WO was a mapping between three very different database servers that all stored somewhat related data. But there were some caveats, like one database storing a compound key as 'XXXX','YYY' where another stored a single column 'XXXX-YYY'. EOF, if you knew what you were doing, would handle this effortlessly and you could actually join across these without problem. For instance, in the table with the compound key you could define an attribute newStyleNumber with a read format of (column1 || '-' || column2) and it would handle it (mind the PostgreSQL string concatenation there).

Obviously I'm breaking the "pureness" of the ORM there, but the point is that you wouldn't actually use that attribute in code; it was a slick way to pass more or less raw SQL to the DB so it could then do a join with it. Sometimes you simply cannot change the underlying data you have, and being able to use little SQL snippets while still letting the framework do the heavy lifting is a huge win.

Anyway, the point of all this is that RDBMSes don't suck. What sucks is how people are accessing them: basically treating them as raw collections of tuples. Sure, that's what they are, but that's not usually how you as the programmer want to view them. Doing it that way, you wind up having to figure out all of the joins yourself in the middle of your business-logic code, which is just crap. You shouldn't have to be writing that stuff, or even thinking about it, mid-stream.

Comment Re:Really a surprise? (Score 5, Informative) 493

That's way off base. There are no context switches when making a library call. Context switches occur when you ask the kernel to do something by making a syscall. So memcpy or memcmp don't incur a context switch. Nor do fopen or fread in and of themselves cause context switches. But one will occur when the underlying open and read calls are made.
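A quick illustration: run something like this under strace on Linux (or dtruss elsewhere) and you'll see open/read/close in the trace, but nothing at all for the memcpy.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        char src[32] = "stays entirely in user space";
        char dst[32];
        memcpy(dst, src, sizeof src);    /* library call: no kernel entry */

        int fd = open("/etc/hostname", O_RDONLY); /* syscall: traps into kernel */
        if (fd >= 0) {
            char buf[64];
            ssize_t n = read(fd, buf, sizeof buf); /* syscall */
            if (n > 0)
                fwrite(buf, 1, (size_t)n, stdout); /* buffered by libc; the
                                                      write() syscall happens
                                                      only on flush */
            close(fd);                             /* syscall */
        }
        return 0;
    }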

What's really needed here is a profiler to find where the code is spending the bulk of its time. My guess is that it's a compiler issue. And other comments about the Windows build using profile-guided optimization tell me my guess is probably right.

Comment Re:In Proof Of Stupid, Look No Further (Score 1) 648

The two 32-byte AES-256 keys concatenate to form a 64-character string that begins with "ourhardwork" and ends with "(c)AppleComputerInc". So to me it seems like there might in fact be a copyrightable expression of an idea here, since the keys are English text that tells you what it's for and asks you not to steal. I've seen poems with fewer letters that are copyrighted.

I think most people don't realize what the keys are when they claim the keys themselves cannot be copyrighted. Clearly the keys aren't a random, computer-generated (and thus not copyrightable) set of numbers. They are a human-written string of text.

So now even if you completely ignore the DMCA you still have the fact that in order to make OS X work on a PC you must by necessity copy those keys from a Mac onto the PC. There is no getting around this. If those keys are considered a copyrighted work (and why wouldn't they be?) then you just violated basic well-established copyright law.

So that is what I meant when I said that Apple is setting up a legal minefield. Even if Psystar gets past the DMCA portion and the replacement for "Dont Steal Mac OS X" isn't considered a DMCA circumvention device, it could still be considered a violation of plain old copyright.

Comment Re:In Proof Of Stupid, Look No Further (Score 2, Informative) 648

I assure you they are not trying to claim the bootloader does any checks since they are using my bootloader and not Apple's boot.efi.

I think what they are trying to claim is that Apple's kernel startup routine blocks certain machines. And this is true. It blocks any CPU that is not family 6, and I think it also checks for certain models (like 14 and 15, which are Core and Core 2). Beyond that it also checks the LAPIC version, which, if they actually enforced it, would really fuck with running OS X under VMware.

Of course the problem with Psystar's argument is that Apple is checking for these things because you need this information to properly initialize things for the processor. Apple can easily argue that they only bother checking for CPUs that they use in their machines because they have no reason to explicitly support anything else. And it would require at least some small amount of explicit support.

The flip side of this is that for the last few OS X point releases, Apple has finally got someone dedicated to doing the code releases and the equivalent Darwin xnu kernel source is coming out like the next day after Apple pushes out the update. It takes like 5 minutes to apply your patch to the new version since the startup code doesn't change very much. Then it takes like 10-20 minutes to build the new kernel.

Of course none of this has anything to do with what you were talking about, which is the actual check for Mac hardware. That is separated into its own kernel extension called "Dont Steal Mac OS X.kext". And it isn't actually a check. It asks the SMC kext to ask the SMC (the System Management Controller, present only on Apple hardware) for two values. It then installs a hook function with the kernel (you can find dsmos_hook in the open source) which the kernel will call anytime it needs a page decrypted. Dont Steal Mac OS X.kext then implements that function, using the decryption keys it retrieved from the SMC to AES-256 decrypt the pages the kernel asks to have decrypted. I've done a write-up of this on my site which should hopefully demystify the process a bit.
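The shape of the mechanism, as a user-space sketch: all of the names and the registration function here are made up for illustration, NOT the real xnu interfaces, and I'm glossing over the exact AES mode and how the SMC hands back the key material.

    #include <stddef.h>
    #include <openssl/aes.h>   /* any AES-256 implementation will do */

    /* Hypothetical hook type: "the kernel" calls this whenever it
       needs a protected page transformed. */
    typedef void (*page_decrypt_hook_t)(const void *in, void *out, size_t len);
    static page_decrypt_hook_t g_hook;

    void register_page_decrypt_hook(page_decrypt_hook_t h) { g_hook = h; }

    /* The kext side: in the real thing the keys come back from the
       SMC; a zeroed placeholder stands in for them here. */
    static AES_KEY g_aes_key;

    static void dsmos_style_hook(const void *in, void *out, size_t len) {
        /* decrypt the page 16 bytes at a time (mode simplified) */
        for (size_t off = 0; off + 16 <= len; off += 16)
            AES_decrypt((const unsigned char *)in + off,
                        (unsigned char *)out + off, &g_aes_key);
    }

    void kext_start(void) {
        unsigned char key[32] = {0};   /* placeholder for an SMC-provided key */
        AES_set_decrypt_key(key, 256, &g_aes_key);
        register_page_decrypt_hook(dsmos_style_hook);
        /* "the kernel" would then call: g_hook(src_page, dst_page, 4096); */
    }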

Might it be that Psystar can win on that part due to the Lexmark decision? Maybe. Or maybe they get their ass handed to them because the replacement for Dont Steal Mac OS X.kext is a clone of Apple's kext that contains the keys as constant data instead of pulling the keys from the SMC. Those keys are copyrighted and presumably specifically registered with the copyright office as a separate work from OS X.

The bottom line is that this is basically a legal minefield and it looks as though it was specifically architected as such by Apple. Should Apple lose on one thing they can bring suit for something else.

Software

Submission + - BitThief Downloads Torrents Without Uploading

An anonymous reader writes: BitThief is a BitTorrent client developed by the Computer Engineering and Networks Laboratory in Zurich that manages to download torrents without uploading. Overall the download rates are a bit slower than with other clients, but on well-seeded torrents the performance of BitThief is comparable to that of any other client.
