Comment Re:I've heard that before.... (Score 2, Informative) 158

selectors, which I believe can't be prebound (for you java programmers, these are equivalent to interfaces - C/C++ does not have this concept and instead allows direct access to the classes using protected or public)

I'm sorry, but this and most of the rest of your description is completely wrong. Selectors are nothing like Java interfaces. Interfaces are Java's version of Objective-C Protocols. Selectors are abstract method names (Smalltalk calls them symbols). Each Objective-C class has some data structure mapping these to function pointers.

Although I will agree with you that the GPP is somewhat misinformed, I take issue with your statement that selectors are nothing like Java interfaces.

It is true that the class structure of Objective-C (one root NSObject class, at least in common practice) and the class structure of Java (one root Object class) are virtually identical. And it is true that an Objective-C protocol has feature parity with a Java interface: when you think of formal interfaces in Java, the Objective-C equivalent is a protocol.

So Java has anObj instanceof SomeClass, which indicates that anObj is an instance of SomeClass or an instance of some other class that derives from SomeClass. The Objective-C equivalent is [anObj isKindOfClass:[SomeClass class]]. And Java has anObj instanceof SomeInterface, where the equivalent in Objective-C is [anObj conformsToProtocol:@protocol(SomeInterface)].

But then Objective-C also has this nifty thing [anObj respondsToSelector:@selector(doSomething:)] which does exactly what it says and allows you to see if the object will respond to the doSomething selector that takes one argument. Java has no analogue to this. I mean, you can sort of fake it using reflection to find methods but it isn't quite the same thing.

The bottom line is that when a unique selector can be looked up like this, each selector functions almost as if it were its own Java-style interface. There is clearly a parallel between Java's if(anObj instanceof DoSomethingInterface) ((DoSomethingInterface)anObj).doSomething(1); and Objective-C's if([anObj respondsToSelector:@selector(doSomething:)]) [(id)anObj doSomething: 1];

From a coding standpoint, where I would think to use respondsToSelector: in Objective-C I wind up making an interface containing exactly one method in Java. Sometimes it's the right choice to add the extra lines and make an interface (and if so, then you should add all the extra lines and make a protocol in Objective-C). But oftentimes I find the required formality of Java to be a distraction.


Submission + - Apple to Ship Mac OS X Snow Leopard on August 28

okapi writes: Apple® today announced that Mac OS® X v10.6 Snow Leopard(TM) will go on sale Friday, August 28 at Apple's retail stores and Apple Authorized Resellers, and that Apple's online store is now accepting pre-orders.

Comment Re:Can someone explain this guy's logic to me (Score 1) 367

Err.. how would they be doing that.

It's simple really. Apparently there is a federal law saying that the power company cannot do separate metering for inbound and outbound power, as I suggested in my post, but must bill you in terms of net power usage.

That means they credit you the same rate for power you produce as they charge for the power you consume. But most of the time the power you produce isn't worth anywhere near as much as the power you consume. And that's just talking about the power itself. On top of that there is the distribution cost, which is borne by them but which they are effectively paying out to you.

I am not sure what Enronian accounting method you are using but there is no way that this can ever be profitable for the power company. The distribution cost alone guarantees a loss.

Comment Re:Capacity factor and those externalities (Score 1) 367

The electric power companies never did like solar and wind interconnects, especially from residential users, and maybe they have solid reasons for not liking them, apart from utility executives being Blue Meanies with sharp teeth where most people have their stomachs. Maybe a homeowner with a wind or solar setup is producing much less in the way of usable green power than they think and is increasing the use of expensive natural gas in less-than-efficient peaking plants. We are geeks, here, and we can come up with some reasonable back-of-the-envelope estimates of these effects, instead of lapsing into, "Oh the humanity, those EVIL power companies!!"

Well said. My dad was actually VP of Electric Supply at a power company. That meant he was in charge of everything related to supplying electricity: generating it, handling peak loads, buying power from other companies when needed (hopefully not, it's expensive as hell), selling power to other companies when possible, and making sure the transmission infrastructure could support all of this.

The thing is, he isn't a businessman by any means. He's an engineer, with a bachelor's in electrical engineering and a master's in nuclear engineering (though he never did get the chance to build a nuclear plant). He is an engineer's engineer, if you will, and the points you mention are some of the exact ones he has always brought up.

I have a low enough UID here that I can still vaguely remember the days when Slashdot was full of computer geeks and engineers. But it keeps trending more towards the ignorant masses, who don't even attempt to consider why these "green" energy solutions might not be so easy to implement. It's all about the evil corporations and "the man" and oh, those poor underdogs.

It's unfortunate too because at one time I thought that the smart people on the Internet would eventually be able to leverage the new medium to bring sense and reason back to the populace. But exactly the opposite has happened and the established media (e.g. right here with Denver ABC 7 News) has taken over and brought all of the bad things about media to the Internet.

Comment Re:Positive externalities are UNACCEPABLE! (Score 1) 367

We even get cheaper electricity out of the deal, without having to pay for the equipment

Bzzt.. Wrong. You can stop right there. In some places the power company does not pay a separate (reduced) rate for power supplied to the grid but instead just runs the meter backwards. In this case the power company is paying more, in fact MUCH more, for the power. And since they roll most of the transmission cost into the $/kWh rate they are actually paying their producing customers for transmission that the power company has to supply.

Does that sound fair to you? Does it sound like a good deal to the power company? It seems what they want to do with this monthly fee is get back some of the money for transmission costs that they shouldn't be paying out in the first place. Let me be clear here: those selling power to the grid aren't going to have to pay for it, they are simply going to get less money for the power they sell, more in line with the actual cost. That's fair.

Comment Re:Not completely outrageous (Score 1) 367

It may depend on the locality, but generally for the big transmission runs (e.g. the giant steel towers) the power company has to buy the land at market price when putting in new lines, and had to buy the land at market price for the lines it has now. So no, that's not a subsidy. Granted, the local government forced the issue on the prior owner of the property, but realistically the land is usually farmland or some such (or was at the time it was purchased), and if it remains farmland it's not like the farmer can't still use most of the land, at least the parts between the big towers.

As for the local utility poles, I am not entirely sure. Generally they are placed on the easement on one side of the road. And often, in return for being able to put the poles there, the power company becomes responsible for maintenance costs (e.g. mowing the grass). You wouldn't believe how much money is spent just keeping trees cut back so they don't grow around the lines.

Incidentally.. pro tip: need mulch for cheap and don't really care what tree it came from? Contact the power company. They are usually more than willing to pull a dump truck up to your house and dump off however much you need for free. It sure as hell beats paying to have it disposed of.

Comment Re:Can someone explain this guy's logic to me (Score 1) 367

I think the problem is that they do not currently charge two separate rates. They ought to, as that is the fairest way to do it. I think, though, that they are just "running the meter backwards" so you get paid the same rate that you pay.

I suspect it would be somewhat costly to replace all of the meters with ones that measure incoming and outgoing power independently, so in lieu of that they'd prefer to just charge a connection fee, under the assumption that it will be a decent estimate of how much the transmission grid is being used.

Comment Re:Can someone explain this guy's logic to me (Score 1, Troll) 367

From the article: The monthly fee, which would pay for distribution and transmission of energy

So no, it is exactly as I described it. And what you suggest is exactly what they want to do, charge a monthly fee to be connected to the grid. There is a small difference though between your plan and theirs. Under their plan if you buy enough power from them they waive the fee because they figure they got enough money to cover the grid costs out of the combined power+transmission cost per kWh they generally charge for power.

Comment Re:Can someone explain this guy's logic to me (Score 4, Informative) 367

Actually no, it's about simple accounting for resources. It costs money to maintain the electric grid. There are two basic costs involved for you to receive power: 1) Cost of generating the power, 2) cost of transmitting that power. Ordinarily when you buy power from the power company they roll these together and charge you per kWh.

When you have your own on-site generation you have 3 basic states of use: 1) Using some amount of power from the grid, 2) using zero power from the grid, 3) putting power back into the grid. For state 1 and 2 you are simply charged for electricity as per usual. It's state 3 that's problematic.

The problem is that many people naively expect to get paid the same rate for energy they put back into the grid as for energy they took from the grid. But the rate they paid to take energy from the grid was generation plus transmission. If the rate they are paid to put energy back in is that same rate, i.e. "running the meter backwards", then they are effectively being paid for transmission services they never provided.

The ideal fix for this is to have two meters. One for inbound power usage and one for outbound power supply. The customer would then have to pay for inbound usage at the normal rate and would be paid for supplying power at a reduced rate. That is, they would be paid for generation of the power but would not be paid for transmission of it because they did not themselves pay for transmission.

In lieu of this, the power company has found it easier to simply charge a connection fee to pay for this transmission. It looks bad to someone who is ignorant of the mechanics of power transmission, and it doesn't seem particularly fair because it's apparently a flat fee, set from the company's estimate of how much the customer uses the grid to transmit power.

That said, it is still fairer than what they are doing now, which amounts to paying the customers who put power back into the grid not only for the generation, which they did provide, but also for the transmission, which they did not. The money has to come from somewhere, and that somewhere is the company's bottom line. So the company will eventually petition to have electricity rates raised to cover this cost, which means everybody else will have to pay more because some people think it's cool that their meter actually runs backwards.

It's really not that difficult to understand. The problem here is that the reporter didn't check her facts or apply logic or reason. Instead it's a he-said, he-said story between the underdogs and the big bad evil corporation. She mentions in the article that she pressed the power company spokesman and got him to admit that "currently, no Xcel electric customers pay extra to fund solar connectivity fees. In reality, Xcel absorbs those fees." Then she goes on to say that "The money from the proposed fee would not go into the pockets of electric customers, but would go back to Xcel."

This is true but nowhere near the whole story. Xcel has a fiduciary responsibility to account for resources used. Right now Xcel's resources are being used without payment; worse than that, Xcel is actually paying someone else to use those resources. That is an untenable situation which can only be resolved by charging someone for it. This can be done either by correctly charging the customers who use these resources or, failing that, by raising the rates for everyone. There are no other options. But Christin did not bother to point out the obvious here.

Comment Re:Read about this yesterday (Score 1) 254

Seems like the best solution for those who don't receive many SMS messages is to restrict SMS messaging to your phone. To do this, call AT&T at 1-800-331-0500 (or 611 from your phone), and ask to restrict text messages. According to the rep I spoke with just now, only AT&T's (free) text messages regarding changes in service, firmware upgrade info or plan info (e.g., how many minutes left or your bill) will go through.

Thanks. I just got my iPhone the other day and I didn't sign up for any SMS plans since I don't use SMS. I just called 611 from the phone and it was no trouble at all to get the plan changed from a la carte text messaging (at an outrageous 20 cents a text, even for incoming) to text messaging restricted. Now I don't have to worry about someone sending me a text message and getting charged for it and hopefully I don't have to worry about this bug since it will presumably be blocked at the server.

Thanks again.

Comment Re:Just like Linux (Score 3, Interesting) 391

Funnily enough, a few months back I made a very similar error, if not the exact same one, while working on the bootloader for Darwin/x86. Except in my case it wasn't exactly a true error, because in the bootloader I know that a page-zero dereference isn't going to fault the machine but will instead just read something out of the IVT.

So as I recall, it seemed perfectly reasonable to go ahead and initialize a local variable with the contents of something in the zero page, then check for NULL and end the function. But GCC had other ideas. It assumed that because I had dereferenced the pointer a few lines above, the pointer must not be NULL, so it just stripped my NULL check out completely. Had it warned about this, like "warning: pointer is dereferenced before checking for NULL, removing NULL check", that would have been great. But there was no warning, so I wound up sitting in GDB (via the VMware debug stub) stepping through the code and looking at the disassembly until I realized that.. oops.. the compiler assumed this code would never be reached because in user-land it would have segfaulted 4 lines earlier if the pointer was indeed NULL.

Obviously the fix is simple. Declare the variable but don't initialize it at that point. Do the NULL check and return if NULL. Then initialize the variable. If using C99 or C++ you can actually defer the local variable declaration until after you've done the NULL check, which IMO is preferable. It may be that the guy wrote it as C99 (where you can do this), then found the compiler wouldn't accept that in older C and simply moved the combined declaration-and-initialization up to the top of the function instead of splitting the declaration from the initialization. My recollection of how I managed to introduce this bug myself is hazy, but as I recall it was something like that.
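A minimal sketch of the pattern (made-up names, not the actual bootloader code): in the broken version the dereference precedes the check, which licenses GCC to delete the check as dead code; in the fixed version the check comes first.

```c
#include <stddef.h>

/* Hypothetical IVT entry, for illustration only. */
struct ivt_entry { unsigned short off, seg; };

/* BROKEN: the pointer is dereferenced at the point of
 * initialization, so an optimizing compiler may assume it is
 * non-NULL and strip the later check entirely. */
static int handler_offset_broken(struct ivt_entry *e) {
    unsigned short off = e->off;  /* deref here... */
    if (e == NULL)                /* ...so this check may be removed */
        return -1;
    return off;
}

/* FIXED: check first, dereference only afterwards. The C99-style
 * late declaration keeps the deref safely below the check. */
static int handler_offset_fixed(struct ivt_entry *e) {
    if (e == NULL)
        return -1;
    unsigned short off = e->off;
    return off;
}
```

With a non-NULL argument both versions behave identically; the difference only matters in an environment (like the bootloader) where a NULL dereference quietly succeeds instead of faulting.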

Comment Re:Yeah... (Score 1) 1057

I don't know the scientific community was pretty adamant in its consensus against the Bush administration on this.

More the case that the bulk of the media was adamant in its consensus against the Bush administration and made sure to put only the scientists on the air who would say things against the Bush administration. To appear to be fair they'd occasionally put republican pundits on to counter the scientists. But they wouldn't usually put a scientist on with the view that climate change might not fully be explained as man made. And if they did they'd simply accuse him of being a republican shill and make all sorts of specious arguments to discredit him in the eyes of their viewers.

This guy sounds like a holdover from individuals hired by the previous administration to refute the rest of the scientific community.

If you FTFL (followed the fucking link) you'd see this: Senior Economist, U.S. Environmental Protection Agency, Washington, DC, 1971 to present

So here's a guy with a masters in physics and a PhD in economics who's been working for the EPA since the Nixon administration. But, oh, wait. He isn't an "environmental scientist" so his opinion doesn't matter. That's the best trick yet that's been used to discredit everyone who doesn't toe the party line that climate change is fully or mostly caused by man and that we must take drastic action to attempt to reverse it.

The rub I have with that trick is that environmental science seems to have as one of its axioms the presupposition that climate change is man-made. That is to say, environmental scientists presume that statement to be true without having to back it up. Then they focus on researching ways man can change his behavior to have less of an impact on the environment.

Saying that environmental scientists have formed a consensus that man is causing global warming is like saying that cattle ranchers have formed a consensus that beef is the best meat.

The other thing I hate about this whole debate is that ultimately it is not one of science at all. The question is not "does man have an effect on the earth" because the answer to that question is undoubtedly a resounding YES. The act of you simply breathing has an effect on the earth. So we can get more specific and ask questions like "is man directly responsible for rising global temperatures?" and "are we going to cause the planet to become uninhabitable?" and "if so, how long do we have?".

The answers to those questions are a lot murkier and there has been a fair amount of bogus research out there. One great example is the whole "hockey stick graph." Intended to show how much more temperatures rose in the 20th century compared with prior centuries, it instead showed the result the "scientist" expected. It showed that result because he explicitly coded the program and its input data based on the assumptions he had been making. The resulting visualization was, of course, exactly what he expected to see. Garbage in, garbage out.

So what we have is a feedback loop where the environmental scientists are all doing research from their assumptions and from past assumptions. There is very little truly "hard data" available in this field. That is simply due to the nature of it. We did not record temperatures until relatively recently. We did not look at what the polar ice caps were doing until relatively recently. For all we know, the temperatures might have risen and fallen on cycles for years. For all we know, the polar ice caps have been growing and shrinking for years.

In lieu of hard data, environmental science tries to come up with methods of interpolating this data based on other observations that were recorded, or based on archaeology we can do now. But we can't test what the temperature of something was, so we have to try to count the size of tree rings and then write a formula that relates tree rings to what the temperature probably was. But even then there's a shit ton of other variables going into how much a tree grew during a given season. And worst of all, the formulas they use to make these calculations are written by the people who want to see the result that temperatures were lower and steadier in past centuries.

So all of the interpolated data they use is based on their own assumptions of how it should be interpolated. The assumptions of people with a vested interest in claiming that man-made global warming is occurring.

Thus the question of how much of an effect we are having on the environment is very, very difficult to answer, and the only "consensus" is from a field of scientists whose field it is to form a consensus and work from there. Therefore, there is effectively no real consensus at all, just the assumptions made by these people. And as to the question of how long we have, you're dealing with something where you have to predict the future and try to come up with ways to model it, hoping that your assumptions are not wrong. And again you have a situation where there is consensus by design, because the environmental scientists aren't questioning each other's assumptions.

And as we can clearly see here, when any scientist dares question the assumptions he is attacked for it, particularly when it is in the realm of politics. And the reason for this is that the politicians don't care what the true answers are. They don't want real research with all of its murkiness and footnotes. They want a group of people who will unequivocally say that man is causing severe climate change and that we must stop it at all costs. And they want it precisely because of the "at all costs" part because that gives the politicians more power. Fear is a great tool of the politician.

Comment Re:You know it's bad when (Score 1) 137

I dunno about you, but installing XP on new standalone hardware (using our legacy VLK licences) is a royal pain in the bum these days. Needing a floppy disk to install the SATA drivers, or patching the OS ISO, futzing around trying to find compatible sound card drivers, wireless network card drivers, the multitude of patches (thank $deity for SP3 rollup, it was getting rediculous post-patching SP2 even with WSUS).

It's only hard if you're actually using CDs and F6 floppies. You can ditch the CD by using Windows Deployment Services (WDS) in Legacy (i.e. RIS) mode. There is even a tool for Linux systems that will run on port 4011 and speak the BINL protocol (among others), so you can run this on a Linux server along with tftpd and Samba.

If you put it in OEM mode it'll even copy entire folders onto the C: drive during text-mode setup. So what I do is I have it copy a Drivers folder to C:\ with the common chipset, disk, and ethernet drivers. I have a few video drivers (like Intel GMA) in there as well.

The only catch is that when you put it in OEM mode (this is a setting in i386\Templates\winnt.sif) you lose the ability to do recovery-mode setup and the ability to use F6 floppies. The workaround is to edit txtsetup.sif and include your "RAID" driver. Typically these days you're talking about Intel, AMD/ATI, and nVidia AHCI controllers. Most of our machines are using Intel, but the other day I ran into a Toshiba laptop with AMD. Once I figured out how to download the AHCI driver (AMD's site is crap!) I was able to integrate it just like the Intel one.

The only gotcha here is that I only modify TXTSETUP.SIF a very small amount so that the text-mode setup can use the driver and it can put it in the start services list for GUI mode setup. There are ways to make it also install it (as if it were part of Windows) but I eschew this in favor of simply making sure the driver is in a subfolder of the C:\drivers dir which is listed in the OemPnpDriversPath.

So what happens in this case is that net booting to text-mode setup uses the BINL server to figure out which ethernet driver .sys file must be loaded then TFTP loads TXTSETUP.SIF which tells it what drivers to load (including all disk drivers built in to Windows). Once it has TFTP loaded everything it starts the NT kernel at which point it is able to use SMB because it has started the correct network driver. Assuming your Windows setup files have the disk driver (i.e. from Microsoft or you edited TXTSETUP.SIF to add it) then it will also have the disk driver and will show you the partitions on your disk.

Basically it's like any other XP setup where it loads a bunch of stuff onto the HD during the text-mode phase then reboots. At this point the system is kind of installed. That is, it booted from C:\WINDOWS using the normal NTLDR boot process. However, it is only using whatever drivers text-mode setup told it to start which is really only the base drivers and the disk driver (in particular, not the ethernet driver yet). The first thing it does is do the hardware detection. This is where the magic of OemPnPDriversPath comes in because in addition to using all of the built-in drivers it also consults all the drivers in this path you specify.

Assuming you made sure at least your ethernet and disk drivers were in there, then when you finish setup you'll have enough drivers to do a Windows Update, which will often get you a lot of the extra drivers. And if not, you're on the Internet, so it's not that hard to go find them.

As of now, the chipset, ethernet, and AHCI drivers are very easy to come by for XP. So XP isn't "dead" by any means. In fact, MOST of the decent OEMs (e.g. HP/Compaq, Lenovo) are still supporting XP, particularly for their business-oriented lines.

Where you run into trouble are things like Toshiba laptops. Toshiba has always had IMO a rather odd way to do their platform drivers and their new models have the drivers buried in some sort of huge "value-add" package that only installs on Vista. I am not sure if it's possible to dig out the drivers but basically you lose all the special keys and the volume control without them. If you can live without these then you can keep running XP and avoid the mess that is Vista.

The worst part about it is that Vista isn't even all that bad. I run it on my work laptop (Apple MBP) just fine. But in that case I am using a full copy of it, not some copy with OEM crapware.

Comment Re:Ridiculous (Score 1) 344

It's the ORM layer that's the real pain in the arse (assuming you're using OOD, and assuming you actually want a direct mapping between your object model and relational model). Things like Hibernate and judicious use of code generate make it a lot easier, but you still need to know what's going on and you still need to (and can!) choose between navigating among objects (letting the ORM do the queries) and generating a hand-written query. To some extent an ORM (and the RDBMS vs OODBMS choice) is just a reflection of the different requirements of on-disk vs in-memory representations of objects. On-disk storage is all about efficient and flexible querying, retrieval, (distributed) concurrency, storage and management of huge data-sets, whereas in-memory storage is all about assigning behaviour and navigating relationships between smaller sets of objects whilst carrying out that behaviour.

Well, there's your problem: Hibernate. Hibernate basically just pulls tuples into objects but doesn't really do a very good job of managing the object graph. My experience has been that users of Hibernate have to actually write the code that pulls in related objects. So the class mapped to your customers table can have a getInvoices() method, but at some point you actually have to write how the invoices are retrieved for a given customer. In the end you also seem to have this situation where you call save on a "root" object and it saves that object and any related objects recursively. But.. that's not relational. That's hierarchical. FAIL.

A more fully-featured design has you describe the tables and their relationships in some sort of a data file. It could be one giant XML file or a number of XML files (like one for each Entity/Table) or even something simpler like a plist file (just a serialized hierarchical key/value dictionary).

Probably the best example of this is the Enterprise Objects Framework (EOF) component of WebObjects. Everything you do is done through an "editing context" that is somewhat akin to a database transaction. Typically you pull in your "root" objects by asking the editing context to do a query for you. From then on out you just ask your customer object for invoices (to-many) or for account manager (to-one) or whatever. The editing context records all of the changes you make relative to what was fetched from the database and when it comes time to save them you ask the EC to save and it figures it all out no matter how complex your object graph is.

More advanced usage allows for the ability to fake attributes and even relationships giving you attributes that are actually sums or averages or whatever of some related data.

Apple themselves took this concept, pared it down, and brought it back from Java to Objective-C as "Core Data". By pared down I mean it's hardwired to use a SQLite store vs. any old SQL database. Well there is an XML store as well but we won't get into that here.

Other similar options include Apache Cayenne (Java) and Telerik ORM (.NET). Those I have not explored as fully as WebObjects, but at the basic level they seem to be structured in much the same way. The one thing I haven't yet figured out with those is how you would go about doing some of the derived attribute stuff I did in a fairly large WO project.

Basically what I had in WO was a mapping between three very different database servers that all stored somewhat related data. But there were some caveats, like one database would store a compound key like 'XXXX','YYY' and another would store one column 'XXXX-YYY'. EOF, if you knew what you were doing, would handle this effortlessly and you could actually join across these without problem. For instance, in the table with the compound key you could define an attribute newStyleNumber with a read format of (column1 || '-' || column2) and it would handle it (mind the PostgreSQL string concatenation there).

Obviously I'm now breaking the "pureness" of the ORM, but the point was that you wouldn't actually then use that attribute in code; it was a slick way to pass more or less raw SQL to the DB so it could do a join with that. Sometimes you simply cannot change the underlying data you have, and being able to use little SQL snippets while still letting the framework do the heavy lifting is a huge win.

Anyway, the point of all this is that RDBMSes don't suck. What sucks is how people are accessing them: basically treating them as raw collections of tuples. Sure, that's what they are, but that's not usually how you as the programmer want to view them. Doing it that way, you wind up having to figure out all of the joins yourself in the middle of your business logic code, which is just crap. You shouldn't have to be writing, or even thinking about, that stuff mid-stream.

Comment Re:Really a surprise? (Score 5, Informative) 493

That's way off base. There are no context switches when making a library call. Context switches occur when you ask the kernel to do something by making a syscall. So memcpy or memcmp don't incur a context switch. Nor do fopen or fread in and of themselves cause context switches. But one will occur when the underlying open and read calls are made.

What's really needed here is a profiler to find where the code is spending the bulk of its time. My guess is that it's a compiler issue. And other comments about the windows build using profile guided optimization tell me my guess is probably right.
