
Comment Re:Energy density (Score 1) 380

That's great, but unless society can get used to having electricity only when the sun is shining, we require at least one of three things (or a combination of them):

  - Ultra long-range power transmission lines with losses low enough to be economical, so we can transmit power from an illuminated part of the earth to a part currently experiencing night time (or very thick cloud cover, which is pretty close to the same thing). These, of course, will require enormous investments in infrastructure for moving power across continents in very large quantities, and I'm not sure we have the technology to do so without losing the majority of the energy in transit. What kind of voltage would you need to, say, transmit power from New York to London while keeping at least 66% of it on the receiving end? (A rough back-of-the-envelope sketch follows this list.) And that's not to mention the political and economic complexities of managing this, and the security risks as well. We'd need backup plants (probably of a more traditional variety) ready to fire up at a moment's notice if something were to happen to the supply coming from another country.

  - Enormous amounts of energy storage. This is currently a major issue for us. Batteries are expensive, and all the *good* batteries actually DO require a tremendous amount of scarce materials like lithium, cobalt, and nickel. Supercapacitors have been talked about as a potential replacement for batteries for a very long time, but no one has been able to get all the desirable characteristics into one device (low cost / easy to manufacture, high energy density, minimal self-discharge over a storage time of at least 12 hours, and a large number of charge cycles before replacement).

  - Other sources of power to cover the times when the local/regional solar output can't cover demand. If those other sources end up not being wind or hydro or geothermal (due to geological or meteorological conditions), it's probably going to be nuclear fission (nasty waste) or fossil fuels (doesn't solve the problem we're trying to solve). Having to run nuclear 50% of the time means you may as well run it 100% of the time, because of the high cost and time investment required to start up a nuclear plant. Fossil fuel plants are more flexible, so we could actually cut our fossil fuel use by 50% using this scheme, but that still leaves the other 50% on the table, which isn't so great and sends the wrong message.
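To put a rough number on the transmission question above, here is a minimal back-of-the-envelope sketch. Every figure in it is an assumption chosen only for illustration (a ~5,600 km route, 2 GW transferred, roughly 0.01 ohm/km of conductor resistance), and it only counts resistive line loss, ignoring converter losses, cable effects, and everything else a real HVDC design has to handle:

```c
/* Back-of-the-envelope sketch for the New-York-to-London question above.
 * Every number here is an assumed, illustrative figure -- not engineering data --
 * and only resistive line loss is modeled (no converter losses, no cable effects). */
#include <stdio.h>

int main(void)
{
    const double distance_km = 5600.0;   /* assumed cable route length */
    const double power_w     = 2.0e9;    /* assumed 2 GW transfer */
    const double ohm_per_km  = 0.01;     /* assumed bundled-conductor resistance */
    const double r_total     = distance_km * ohm_per_km;
    const double voltages_kv[] = { 320.0, 500.0, 800.0, 1100.0 };

    for (int i = 0; i < 4; i++) {
        double volts  = voltages_kv[i] * 1000.0;
        double amps   = power_w / volts;            /* I = P / V        */
        double loss_w = amps * amps * r_total;      /* P_loss = I^2 * R */
        printf("%6.0f kV: I = %6.0f A, resistive loss = %6.1f%% of sent power\n",
               voltages_kv[i], amps, 100.0 * loss_w / power_w);
    }
    return 0;
}
```

Under those assumptions, 500 kV already loses close to half the power as heat, while something in the 800 kV-and-up range keeps you well above the 66% threshold -- roughly the voltage class of today's longest UHVDC links, which run over much shorter distances than an Atlantic crossing.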

An effective system would probably use a combination of all three of these measures to deal with the many logistical problems of solar, but the unfortunate fact is that we're very deficient in the materials science, manufacturing technology, political will, and raw materials that would be required to comprehensively compensate for solar's limitations by strategically employing all three of these methods.

Without solving these problems at a national and eventually global level, you will end up with an extremely inefficient system, and the inefficiencies in it will cause a "death by a thousand cuts" type of problem, where the resulting solution delivers intermittent blackouts to most customers (or alternatively, no blackouts but a large percentage of the time spent running off other energy sources); costs far more than their old fossil-based power; and requires a significant number of traditional power plants to keep running to cover for the worst of its shortcomings.

Oh, and you also seem to have based your math on the assumption that per capita power demands won't increase. Unfortunately, in order to comprehensively eliminate fossil fuels, we'd need to convert the vehicle fleet to EV (or at least plug-in hybrid), which means per capita electricity demands are going to skyrocket.

At the very least, we're going to need even more nuclear power plants to provide a strong base load in the future. Solar might enable us to shut down numerous coal power plants at least some of the time, but you'd have to over-engineer your effort to satisfy the world's energy demands by a factor of eight or ten in order to compensate for: increasing demand per capita, an increasing number of people, efficiency losses due to transmission distance, efficiency losses due to storage, and relatively unpredictable loss of production due to weather and the constantly shifting day/night cycle. Places that get little daylight for much of the year would have to be completely dependent on external energy suppliers, or else produce their own energy using traditional methods.
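To show how a multiplier like that can fall out of the arithmetic, here's a minimal sketch. Every input is an assumption chosen only to illustrate how the factors compound; plug in your own estimates and the multiplier moves accordingly:

```c
/* Illustrative sketch of how an overbuild factor compounds. All inputs are assumptions. */
#include <stdio.h>

int main(void)
{
    const double per_capita_demand_growth = 1.5;  /* assumed: EVs, electrified heating */
    const double population_growth        = 1.3;  /* assumed growth over a few decades */
    const double transmission_efficiency  = 0.90; /* assumed long-distance losses */
    const double storage_round_trip       = 0.80; /* assumed storage losses */
    const double solar_capacity_factor    = 0.25; /* assumed average output vs. nameplate */

    double overbuild = (per_capita_demand_growth * population_growth)
                     / (transmission_efficiency * storage_round_trip * solar_capacity_factor);

    /* With these particular guesses, the result lands around 11x today's average demand. */
    printf("Nameplate solar needed: roughly %.1fx today's average demand\n", overbuild);
    return 0;
}
```

Which lands in the same ballpark as the factor of eight or ten above.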

Take into account all of that, plus the fact that we are not currently producing photovoltaic cells at a rate that exceeds the year-over-year growth in power consumption [citation needed], and it seems that we would have to embark upon a worldwide project on a scale that has never been attempted before in order to do this.

And politically, most people just *aren't willing* to sink that type of investment into a system, even if it's ultimately for our own good.

To wrap it up:

1. Scarce raw materials play a significant role in the *total solution* for solar, particularly where energy storage is concerned.
2. No one is willing to bankroll a project large enough to make solar the simple majority energy source of humanity (50.00001%), let alone anything more than that.
3. Making solar cheap and efficient would require significant advances in science and engineering that may not be reachable for another couple of decades at the earliest.
4. Unless we keep *most* (if not all) of our existing power production facilities operational and fueled and ready to come online, I don't see us having the means to solve the day/night reliability problem of solar (the energy storage demands are just too great). And keeping the old guard around will be extremely costly and unprofitable if they aren't operational 24/7.

Comment Vala (Score 1) 292

Vala translates syntax very similar to C# into idiomatic C that uses GLib for object-based programming (inheritance, encapsulation, events, etc. are all supported). A few hundred lines of Vala spit out thousands of lines of boilerplate C. You get native code that's nice and fast (reference counting avoids garbage-collection pauses, and there's no intermediate runtime like .NET or the JVM, since Vala compiles to C, which compiles to native code). A couple of programs on popular Linux distros are written in Vala.

It's a great language for plugin development, too. Unlike in languages such as Python, bindings to C/GLib libraries don't require any separately compiled glue code or runtime integration, since Vala has no special runtime outside of GLib.

And, if for some reason Vala's development stalls and you find yourself unable to compile changes to your Vala code, you can always take your project's generated C code and adopt that as your main source. It's less maintainable because of all the extra boilerplate, but there are plenty of large projects containing hand-written idiomatic C/GLib code that's functionally equivalent to what Vala's compiler generates anyway, for all kinds of patterns: inheritance, signals, properties, etc.
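For readers who haven't seen that boilerplate, here is a rough, hand-written sketch of the kind of C/GObject code involved for one small class with a read-only property and a signal. The class and all names are made up for illustration; Vala's actual generated output differs in its details:

```c
/* Hypothetical "DemoCounter" class, hand-written in plain C/GObject.
 * In Vala this would be a handful of lines: a class, an int property, a signal. */
#include <glib-object.h>

#define DEMO_TYPE_COUNTER (demo_counter_get_type())
G_DECLARE_FINAL_TYPE(DemoCounter, demo_counter, DEMO, COUNTER, GObject)

struct _DemoCounter {
    GObject parent_instance;
    gint count;                         /* backing field for the "count" property */
};

G_DEFINE_TYPE(DemoCounter, demo_counter, G_TYPE_OBJECT)

enum { PROP_COUNT = 1 };
enum { SIGNAL_INCREMENTED, N_SIGNALS };
static guint signals[N_SIGNALS];

static void
demo_counter_get_property(GObject *obj, guint prop_id, GValue *value, GParamSpec *pspec)
{
    DemoCounter *self = DEMO_COUNTER(obj);
    switch (prop_id) {
    case PROP_COUNT:
        g_value_set_int(value, self->count);
        break;
    default:
        G_OBJECT_WARN_INVALID_PROPERTY_ID(obj, prop_id, pspec);
    }
}

static void
demo_counter_class_init(DemoCounterClass *klass)
{
    GObjectClass *object_class = G_OBJECT_CLASS(klass);
    object_class->get_property = demo_counter_get_property;

    /* Property registration -- one of the chunks of boilerplate Vala writes for you. */
    g_object_class_install_property(object_class, PROP_COUNT,
        g_param_spec_int("count", "Count", "Current counter value",
                         G_MININT, G_MAXINT, 0, G_PARAM_READABLE));

    /* Signal registration -- the C equivalent of a Vala 'signal' declaration. */
    signals[SIGNAL_INCREMENTED] =
        g_signal_new("incremented", DEMO_TYPE_COUNTER, G_SIGNAL_RUN_LAST,
                     0, NULL, NULL, NULL, G_TYPE_NONE, 1, G_TYPE_INT);
}

static void
demo_counter_init(DemoCounter *self)
{
    self->count = 0;
}

void
demo_counter_increment(DemoCounter *self)
{
    self->count++;
    g_signal_emit(self, signals[SIGNAL_INCREMENTED], 0, self->count);
}
```

Multiply that by every class, property, and signal in a large program, and the appeal of having Vala generate it becomes obvious.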

Worst case, it saves you some time by sparing you from writing all that boilerplate by hand. Best case, it saves you *A LOT* of time, letting you write code that reads like C# but without a runtime dependency (which is both a deployment headache and a source of inefficiency).

Comment Energy density (Score 2, Informative) 380

It's there, but we have to put an absurd amount of relatively scarce resources into photovoltaic cells to make *use* of that energy. Otherwise, it performs a very useful function in that it gives us this thing called heat, so we don't all freeze to death (and so the ecosystem of animals and plants we depend on doesn't have to live in climate-controlled enclosures to avoid the same fate).

No, I'm sorry, but the primary purpose of the sun is to give us energy in the form of light for plants and heat for everything else (including plants). With current technology we can't make use of the sun well enough to meet more than a fraction of our energy demands.

Now, if we had a steadily shrinking worldwide population, we might be able to do it, since we'd have more and more surplus energy every year without even doing anything -- which means if we continue to increase production of renewables like solar and wind year over year, and population decreases, it's mathematically certain that at a point not too far in the future the two will intersect and we can shut down the last coal/diesel/natural gas/nuclear plants.

But unless you can find a way to cause the population to shrink worldwide year over year in a controlled, preferably non-violent manner, I don't see a way that renewables will ever become dominant. It's not economically feasible. We can't divert enough resources to making solar panels and wind farms to meet energy demands, even if we cut worldwide energy demands per capita by 25% immediately and cut the energy use of the most energy-intensive top 5% by 75%. Even with such unrealistic and aggressive cuts in per capita consumption, an exponentially increasing population will ultimately make the exercise pointless.

Comment Re: buh, bye (Score 2) 494

As a Democrat who's more liberal than nearly all of the running Democratic candidates, I could see myself being content to let the country be run by most of the Republican presidential candidates or presidents from roughly Eisenhower up to, but not including, Dubya. Eisenhower is my favorite Republican of all time; Nixon did a few things right and many things wrong; we could've done a lot worse than George H. W. Bush; Reagan was okay because most of his crazier ideas didn't get implemented, and the ones that did were beneficial or not very harmful; and Dubya was disastrous.

I wouldn't have voted for them, but they wielded the responsibility of the Presidency pretty well overall, and occasionally supported more liberal initiatives like government-funded space exploration, social programs, and civil liberties. In fact, defending civil liberties was the marching order for the Republican party for a long time.

I also believe that the way the Presidency would swing between Democrat and Republican every election cycle or two was a big contributor to making the country a better place overall. Each party and each President would have something on their agenda and would address some severe problem, bringing their own net-beneficial changes to the table - which meant that as long as we kept switching parties, we'd be okay.

The problem today is that the policy differences between the most popular (i.e. most likely to be elected) candidates on both sides are extremely small and nitpicky. Since popularity is more or less a positive feedback loop, this all but guarantees that, even this early in the election season, we have a good sense that either Hillary or Trump will be our next POTUS. And their views are close enough that, in the past, you could've roped them together into one party.

Now, in BOTH parties, anyone in favor of civil liberties and against big government and mass surveillance is marginalized into the fringes and will almost certainly not get past the primaries. We live in a fear society. Promoting fear and "big government will protect you" gets you votes. A warmongering foreign policy is popular. The military-industrial complex is popular because of all the useless bureaucratic desk jobs it opens up. Eisenhower must be rolling in his grave.

Comment Re:We are stupid (Score 1) 378

Better yet, they can collude with each other to make this an "industry standard" with some bullshit justification for why they need it; then, with 100% of the mass-producing printer/scanner manufacturers doing it, customers will have no recourse.

The second-best thing after a monopoly is an oligopoly, and in an industry that's shrinking because it's being replaced by something faster and cheaper (namely, using computers and the Internet instead of paper), they'll do anything possible to cut costs or raise revenue. They do it in growing industries that have a bright future ahead, too, but the losers are even more strongly motivated to do it in a last-ditch attempt to stay open for longer.

This is what happens when you give personhood to faceless entities that have no sense of morality and no loyalty except to the almighty dollar, and then let those entities run society.

Comment Re:Amazing (Score 1) 492

Don't forget, politicians will say anything to get someone to vote for them, and then do an about-face later and happily screw them over once they are in office.

Vote for Trump on this issue, and once he's President, he'll let the Indians come in and take over the country by issuing unlimited work visas.

Comment Bus Factor (Score 5, Insightful) 157

With all due respect to Harlan Stenn, and working under the assumption that he will choose to continue maintaining NTP for the good of everyone who uses it, the biggest donation that could possibly be given to the NTP project would be to increase its bus factor. Basically, we need at least another small handful of people -- ideally distributed throughout the world -- who have the same level of knowledge and expertise as Harlan in the area of network time, and who can thus take his place if, for any reason whatsoever, Harlan can't continue to work on the NTP project.

Getting Harlan to continue working on it is a short-term solution, but the sustainable future is to ensure that we have maintainers who can take his place -- ideally, paid ones.

So what we need is for a company like Red Hat or IBM or Microsoft or Canonical to bankroll a developer with fundamentals strong enough to quickly pick up advanced knowledge of network time, who can then spend most of their working hours acquiring that knowledge so the project can be maintained going forward. This would probably involve a lot of mailing-list exchanges with Harlan (or reading his previous posts), as well as with any other developers/maintainers working on pieces of the code.

If Harlan is absolutely instrumental to the project as it stands now, the solution is to have a backup or two, ideally paid a living wage, to ensure continuity of knowledge and expertise should Harlan stop contributing, willingly or otherwise.

Projects with a bus factor of 1 that are widely relied upon need to be identified and highlighted every now and again -- not to make a case to shower the developer in money, but to get other developers to work in the same space and increase the bus factor to at least 3.

Comment Re:Depends on what you do with the data (Score 1) 170

This is a good point, and certainly makes a lot of sense.

As a more concrete example of how this can affect salaried IT workers -- for those who are not familiar with how we operate -- there was a day, a little over a month ago, when I worked 10 hours solid and barely stopped for 10 minutes to nibble some lunch while working (I was less productive typing with one hand, but was otherwise still getting work done even during lunch). I never even checked my personal email the entire day, let alone visited Slashdot or did anything else I would do on a normal day.

Why? Because I had a high-priority task that was due the same day, and I was assigned to get it done as soon as possible. On top of that, several additional requests came up during the day that delayed my progress on the original assignment. Nearly from the time I walked in the door until I was ready to go home -- after working two hours more than I'm officially required to -- my brain was basically 100% utilized doing productive work. I only took one bathroom break the entire day!

My management knows that I'm able and willing to do this when required, but the reality of my work is that, often, I'm simply not required to be fully utilized. My company is more than welcome to give me additional assignments to increase my utilization, and they actually do, on occasion. I assume that if they felt my time was not worth the output they were getting, I would be separated from the company. Since that hasn't happened and I've received consistent positive feedback from both customers and management, I don't feel bad at all about taking some downtime when I want/need to.

So that just raises the question of why, exactly, these same employers feel the need to deploy such pervasive monitoring and work-tracking systems. Are they doing it just because the technology is there and some salesman convinced them it would increase productivity? Are they doing it out of fear of not detecting the 1-5% of the workforce who are actually bad apples and are in fact not getting their work done in a timely manner?

Whatever the reason, they should realize that it just makes life harder for the majority who will slave away tirelessly if the job calls for it, and won't if the job doesn't. It makes it harder to enjoy the downtime for what it is when you can feel their eyes on your keystrokes.

Comment Re:Depends on what you do with the data (Score 1) 170

Yes, there is potential to become over-confident and careless; but someone who's serious about this type of behavior would constantly work to step up their game and make their behavior harder to detect. Also consider that a worker who's doing a job that is actually, genuinely easy for them to do, and has time to spare after completing all assignments on-time and *properly* (not even half-assedly), can legitimately slack off for the remaining time and the bosses shouldn't have a reason to say anything bad about them.

Also:

1. These days, in many environments, "the geek who seems to be drawing on his bag of tricks" could be anyone or everyone in the office. Are you going to fire your entire workforce? Or what if your top performers -- the people who actually get work done, and do so efficiently -- are the same people skirting your rules? Do you take the loss of a productive employee just so that the remaining employees are compliant with your network policy?

2. It's only bad for morale if others are aware of their behavior.

2a. It's only bad for security if the circumvention methods are being used to (deliberately or accidentally) exfiltrate sensitive data or cause malicious code to gain access to the network. Sure, you could have someone who's smart enough to set up a VPN but dumb enough to download a virus or visit a site with ads that exploit a Flash zero-day; but you're probably just as likely to be compromised by an employee who does not use any special techniques at all, and simply visits an ordinary site (during lunch break) that's been compromised by a bad actor or runs a malicious advertisement on your outdated "standard" browser that's chock-full of unpatched vulnerabilities.

Also, if you're *that* concerned about security, you shouldn't be allowing your employees to access the public Internet from a machine with access to internal resources or company data. Give them a separate, airgapped machine and monitor their time using it vs. their "business" machine.

2b. Bad for discipline? Sure, that's a valid argument. But no human being is so disciplined as to never go off-task. A rational, human-centric way of dealing with discipline is to enforce the minimum amount of discipline necessary for your workers to get done the assignments put before them, and don't expect 100% unrelenting focus on performing as much work as physically possible for 8 hours per day, every day. The amount of attention they need to pay to their job depends on how busy their job is. A store manager at the busiest Home Depot in the United States is going to have less downtime during the day than a security guard at a backwater office building in an area with very low crime and an office full of happy employees. If the security guard is skipping patrols or the store manager is watching Youtube instead of taking care of customers, that's a discipline problem. But most white collar workers spend at least some of their time waiting for other people to do stuff, and they should be allowed to have a little rest and mental relaxation while they do so.

And that point in 2b brings us to a point about income inequality. Although you don't need a degree to manage a Home Depot, I would be perfectly fine with an overworked store manager who's constantly got to be in "Go" mode, making much more than I do as a white-collar worker with several hours of downtime per week when I can slack off WITHOUT shirking my duties. In reality, I probably make more than them. If the compensation were reversed, I'd be fine with that - they work harder, so they deserve more pay.

Now, you might say that there's always something I could be doing instead of having downtime; but my rebuttal to that is I'm always coming up with new ideas and taking initiative to try and improve process and workflow at my job. I've been here for a number of years now, and most of the big improvements I identified have already been implemented in my first year or two, because I couldn't stand how cumbersome things were when I got here. The remaining improvements that could be made are mostly cost-benefit losers; it would cost more money to implement them than would be saved by using them.

And let's not forget that not all of my downtime is spent watching YouTube videos of cats saying "yum yum yum" as they eat their food. I spend a lot of my time pursuing side projects; reading about the latest tech trends; brushing up on my skills; challenging myself to explore a new technology or software; or becoming aware of world events and thinking about how they could affect my company or my job. I would wager that most white-collar workers are in the same boat, if they have downtime at work.

All I'm trying to say is, there's no necessary tie between people who deviate from the rules, circumvent restrictions, etc. and bad performance. Bad performance is itself fairly trivial to identify WITHOUT understanding what employees do with their time in the office: one simply has to look at the work outputs of the individuals.

I'm not talking about metrics that can be gamed; actually manually look at the evidence of the worker producing whatever product it is they're supposed to be producing. Is the work there at all? What's the quality like? Is there evidence that they spent the appropriate time on the work, or does it look rushed? Is the customer (internal or external) happy with the product? These are simple questions that can be answered trivially by any first-level manager who keeps even a peripheral awareness of their employees, and does not need to be tied to surveillance of the employees or enforcing some kind of restrictive web filter that prevents them from "slacking off" by going to Facebook, etc.

Full disclosure: I am a "worker bee" at a white collar software job who periodically doubles as a team lead. Some of the best people I've worked with have comfortably set up and used measures designed to bypass or evade automated surveillance software like keyloggers and web filters. Since I am responsible for the work output of my team, I have a personal motivation, for the sake of my career, to ensure that the work gets done and gets done well; but even so, I've had no problems with the work output of the people who used these circumvention mechanisms.

To the contrary, I find that a good chunk of the employees who completely eschew any kind of non-work-related "defocusing" while on the job tend to be less productive, have less of a penchant for creative thinking, are less likely to become aware of product quality problems early in development, and are less likely to take initiative to improve inefficient processes or voice concerns when something doesn't quite add up. If it's 2:00 in the afternoon and a coworker is off-task, 9 times out of 10 I'll recognize that they are simply clearing their mind, and will get their work done once they feel up to it -- even if that means they have to stay later in the evening and "work longer" (we get paid an annual salary so it doesn't matter) to get it done.

We're all professionals here, and fortunately for us, the exact timing of our work output is somewhat flexible. Does it matter if Joe Developer gets his coding done at 8 AM or 2 PM? Not really, as long as it's correct, efficient, compliant with the requirements, and mostly bug-free. Those who don't do their job (well, or at all) are the ones who get escorted out of the building before their next appraisal, not (necessarily!) the guy who uses a proxy so he can visit Reddit midday to read up on the latest GPU hardware reviews.

Comment Depends on what you do with the data (Score 5, Insightful) 170

It all depends on what you do with the data. The mere act of passively collecting the data is relatively benign, assuming that no action is ever taken with it and that it's securely stored away so that it can't be exfiltrated or abused. There ARE privacy concerns with this, of course, but most corporate networks explicitly state that users should have no expectation of privacy.

If your boss receives an email for every 5 minutes you spend on Slashdot or Reddit or Anandtech, and marches down to your cube and sternly tells you to get back on task, that solution will only improve productivity in the very near term. The worker will fear for their job, so they'll do more work and go off-task less. But that stops being effective as soon as the worker can leave to find another job, or comes up with an alternative way to go off-task while avoiding detection, or half-heartedly does their work in a way that appears to show progress but doesn't (e.g., gaming the metrics). The end-game of "cracking the whip" is almost never a worker who willingly gives up whatever they'd rather be doing and suddenly enjoys their work more.

If, however, you collect all the data in aggregate and then discuss it during their annual performance review, and have it play a factor in their compensation, that could definitely be a strong motivator for people not to be off-task: if they associate slacking off with getting lower raises / bonuses / etc. and steady work output with higher compensation, most people will probably try to slack off *less*, at least. It also has the side effect of saving the company some money by being able to justify not giving a raise to someone who spends most of their time slacking off.

Either way, though, there is always going to be a way to game the system. If they track you at the network level, just use a proxy or VPN to an address that looks like it's on-task, or is too vague to get a sense of what exactly it is (e.g., since many sites use EC2 or S3 to serve content for all sorts of purposes, there's not a lot you can say about whether traffic to an EC2 box is business-related - maybe they're doing actual research for their white collar job?). If they're keylogging, set up a VM and plug in a USB keyboard straight into the VM. If you have decent cellular data at your desk, you could do your thing on a smartphone, assuming you can tolerate the display and input device limitations. Or of course you can just take frequent breaks into a hallway or empty conference room and use your own laptop/tablet/smartphone.

The only way to truly keep white-collar workers on task for 8 solid hours per day is to assign one supervisor per worker bee, but the overhead of that proposition is so high that no one will do it, because the costs will far outweigh the benefits.

Or there's Manna, http://marshallbrain.com/manna... which could be a possible future if AI or a close-enough approximation thereof turns out to be feasible.

Comment Major performance problems at that spec level (Score 0) 141

With specs like that -- the worst of it being the low amount of RAM and the likely extremely slow NAND -- that phone will probably have severe performance problems with many popular apps, even some of the Google apps. I have an old "Android-on-a-stick" device with similar specs from a few years ago that can barely run the Play Store now.

And I'm not even talking about games. Web browsers, navigation apps, media players, voice assistants, productivity apps, and even shopping-list apps have seen their performance demands increase. They're doing more I/O and have more dynamic functionality than ever before.

In my experience, you're mostly fine right now if you're running at least a Snapdragon S4 Pro or later (or something comparable from another manufacturer). If you have something that benchmarks much slower than that, which is likely for a $10 SoC (MediaTek?), many common apps will be unbearably slow, even if your network is fast. And the RAM factors in once you consider how many background services run on Android devices these days. I think my Note 4 has more services running than my Windows 10 desktop with the kitchen sink of third-party software installed.

I get what they're trying to do, but people are going to be unhappy with these devices if they try to use them for much more than a literal cellphone.

Comment Re:Thanks anonymous reader! (Score 1) 294

Privacy is important, indeed, but I wonder if this will also break functionality on some websites. What if the final "Buy Now" step on a site is a plain link rather than a button? You hover over it, thinking it over; but little do you know, your browser has already made the decision for you. When you realize your bank account doesn't have enough money for the purchase, you decide not to place the order -- but then you check your email and find an order confirmation from the vendor.

Ouch.

Comment You cannot teach creativity (Score 3, Interesting) 207

The most galling fallacy in this short statement isn't that he thinks "geeks" aren't creative; it's that he thinks art education makes people creative. Here's some news for you: it doesn't.

The MOST an art class can teach you is how to follow the design memes of the people who came before you. However, that's not necessarily a good thing. Those design features may have been very creative and engaging when they first started being incorporated into works, but once they're used so widely as to become monotonous, throwing them in actually makes a product *worse*.

Consider, for instance, how many games have a soundtrack that is extremely similar to every other game in their genre. It's not similar enough to lead to a copyright infringement lawsuit -- usually -- but it's "generic" in the sense that it borrows 90% of its design features from past works, whether earlier titles from the same developer or from competitors. These soundtracks often receive poor reviews when they don't stand out in any particular way from the games that came before, and players tend not to remember the music after they stop playing.

On the other hand, the best, most memorable and enjoyable game soundtracks have all been extremely original, with major innovative design features that give the title a distinct "feel" or "sound." This can be VERY powerful and can greatly boost the sales of the product.

Similar comparisons can be made of visual assets in games, of course.

The problem is, even though you can teach someone to mimic what's been done in the past and grade them on their ability to do so, you can't teach people to come up with entirely new design features or concepts on their own. And if you tried to grade an art class based on how unique or original the design features were, most students at the high school and 4-year degree level would fail the class because they couldn't think of anything creative that was also good. (You could technically consider any random selection of features "unique," but not everything unique is beautiful, appreciable, or easily digestible by the person experiencing the work.)

Most truly creative, novel design features that win awards and universal acclaim happen *spontaneously*, without any directed methodology behind the choices made. Sure, the creator may digest some of the game's existing art as "input" when figuring out how to come up with more assets (textures, sounds, music), but even with that input, there are numerous directions for the new content that look equally viable from the outset. It's not until you get others to experience your content that the feedback starts coming in: "wow, this is incredible!" or "this sounds very generic."

So yeah, throw away money making coders spend extra hours bored in art class doing watercolor paintings, as if that's going to make England's creative output any better. People who are born to be creators tend to do what they love on their own, without having to be forced to sit in a class to do it. You really can't force creativity; the "forced-ness" of it becomes obvious in the content that gets created. That's just the way it is.

And don't even get me started on the stereotype that "geeks" are lacking in creativity. Coding shops used to ask people in interviews what their creative outlet is, whether it's singing, playing instruments, drawing, etc. - and those who didn't have any to speak of were often passed over in favor of candidates who had a creative passion. I imagine that type of thinking is even more prevalent in game studios, though I've never worked at one.

Comment Re:Both devices value form over function (Score 2) 77

It's not true that the battery suffers the same kind of "charge cycle" whether you're recharging it from 0% or from 96%. For lithium-ion batteries, there is no "memory" effect, but there is a "depth of discharge" effect: a deeper discharge reduces the battery's maximum capacity more severely than a shallow one.

It's not the act of plugging the battery into the charger that reduces its usable life; it's the charging itself. The less charging you do, the longer the battery lasts. If you regularly run your battery flat because you're under the misconception that all charge cycles affect the battery the same way regardless of depth of discharge, you're actually making the problem much, much worse.

In actual testing, the best results come from topping the battery up once it drops to 70 to 80% of its maximum charge level (as in, the max it can actually hold before the charging circuit cuts off, not the theoretical max advertised by the manufacturer). That shallow a depth of discharge doesn't put much stress on the battery, and it doesn't generate as much heat as leaving it constantly plugged in, so it's a happy medium.
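As a minimal, throughput-only sketch of why shallower daily discharges stretch a pack's life, assume a hypothetical pack rated for about 500 equivalent full cycles (a made-up placeholder, not a datasheet figure) and compare how long that rated throughput lasts at different daily depths of discharge:

```c
/* Throughput-only arithmetic for the depth-of-discharge point above.
 * The 500-equivalent-full-cycle rating is an assumed placeholder, not a datasheet value.
 * Real cells do even better than this at shallow depths, since shallow cycles also
 * cause less wear per unit of charge moved. */
#include <stdio.h>

int main(void)
{
    const double rated_full_cycles = 500.0;             /* assumed pack rating */
    const double daily_depths[] = { 1.00, 0.50, 0.25 }; /* fraction of capacity drained per day */

    for (int i = 0; i < 3; i++) {
        double full_cycles_per_day = daily_depths[i];   /* e.g. a 25% drain = 0.25 equivalent full cycles */
        double years = rated_full_cycles / full_cycles_per_day / 365.0;
        printf("Daily discharge of %3.0f%%: rated throughput lasts ~%.1f years\n",
               daily_depths[i] * 100.0, years);
    }
    return 0;
}
```

Even before accounting for the reduced wear per cycle at shallow depths, the arithmetic alone favors frequent top-ups over full drains.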
