
Comment Re:But they are the best of the best! (Score 1) 93

Let's go on the theory that they got into Harvard because they are the best of the best. If that were the case, then at most universities they should expect a top grade against the "lesser" students and why should they be penalized with sub-A grades just for being the best?

I think it's probably safe to say that there is pressure to inflate grades, and that such pressure comes from people who think that way.

And I know you know all this, but for the rest of the folks reading, realistically, most of them got into Harvard for one of three reasons:

  • They could afford to go to Harvard, and therefore applied.
  • They thought they were the best of the best, and therefore applied.
  • Their parents went to Harvard and convinced them to apply.

Note that all three of those include the word "apply" in one form or another. The ones who got in are presumably some of the best of the people who applied, with two caveats. First, there is a large pool of people who were equally good but did not get in, because there is a limit to how many students Harvard can take. Second, there is a much, much larger pool of people who were equally good but did not apply, because they:

  • didn't have the money to afford it,
  • didn't perceive themselves to be good enough (impostor syndrome),
  • didn't want to live in the Boston area (B is for Boston, B is for brr),
  • didn't want to go to school with what they assumed would be a bunch of spoiled rich kids,
  • wanted to save their money for a good grad school, preferred to stay closer to home, or
  • were majoring in an area where Harvard is only middle-of-the-pack.

For example, in CS undergrad education, Harvard is tied with UC Santa Cruz down at #37. And UCSC is a short (though moderately painful) drive from Silicon Valley, which makes it more desirable for part-time employment. Harvard is a few minutes on the red line from MIT (#5), which at best makes it an easy trip to another school's recruiting fairs.

So I'll recommend The Tyranny of Merit by Michael Sandel (of Harvard). The more I think about it, the more I like his lottery ideas.

It's not a terrible thought. I'm not sure you'd see a meaningful difference in outcomes if you randomly picked from the top 20% of students nationwide and assigned them to Harvard versus carefully selecting with the level of rigor that they do. What would be really great would be if one of these schools randomly chose 2% of their incoming freshmen from the pool of all applicants, rather than going through the full process, and then compared outcomes.

Comment Re:It's all about definitions. (Score 1) 93

For undergraduate courses, there is just no way that the large majority of students can master the material to get an A if the course is being taught at a reasonable level. There is just too much of a spread of abilities.

Of course it's possible. It is exceedingly unlikely once the class size gets sufficiently large, but it is absolutely possible in small classes.

Consider an honors general psychology class where everyone is in the honors program and chooses to take that class rather than taking their A in the non-honors version of the course. If they would do well enough to get an A in the non-honors course, there's no good reason to give them a B in the honors version, because that just penalizes their GPA for taking a version of the course that covers the subject in more depth and breadth. Now assume that this class has ten students, all of whom would probably have gotten an A in the standard general psych course. The proposed policy would cap that at six As.

And even if you reject the idea that the honors classes should be graded similarly to the non-honors classes and want folks to wear an A in an honors class as some sort of badge of honor (why?), a small elective class still has a real risk of having a section some quarter/semester where everyone is really good or really bad. And just as you wouldn't want to assign As if nobody deserves one, you wouldn't want to deny As if everyone does.

Policies like this only make sense if you cancel any section that has a small number of students or exclude them from the policy. The smaller the sample size, the more the class average can be expected to deviate from the true mean. This is basic statistics (which I mostly picked up in Dr. Zachry's honors general psych). Any policy that doesn't take that into account is fundamentally flawed. Ideally, the grades for each class should be evaluated with a t-test or similar against all of the previous sections of that class, taking the class sizes into account as though both were samples of a larger population. And if that says there's too much difference between the mean/variance of one class and another, that *might* be a hint that the other class was graded unfairly, or it might mean that they're just smarter/better students. To find out which, you then need to compare that group of students' overall per-semester/quarter GPAs against the same metric for the other historical sections of the class.
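As a minimal sketch of that comparison (pure Python; the grade-point numbers are made up for illustration), Welch's t statistic flags when one section's mean grade differs from the pooled historical sections by more than small-sample noise would explain:

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom for two
    samples with possibly unequal variances. Roughly, |t| > 2 suggests
    the two sections' mean grades differ by more than chance alone."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb  # squared standard error of the difference
    t = (ma - mb) / se2 ** 0.5
    # Welch-Satterthwaite approximation for degrees of freedom
    dof = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, dof

# One small honors section (GPA points) vs. pooled historical sections
honors = [4.0, 4.0, 3.7, 4.0, 3.7, 4.0, 3.3, 4.0, 3.7, 4.0]
history = [3.0, 2.7, 3.3, 2.3, 3.7, 3.0, 2.0, 3.3, 2.7, 3.0,
           3.3, 2.3, 3.7, 2.7, 3.0, 3.3, 2.0, 3.0, 2.7, 3.3]
t, dof = welch_t(honors, history)
```

A large |t| by itself only says this section is different; as noted above, you'd still need the students' overall GPAs to tell "graded too easily" apart from "unusually strong cohort."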

Simplifying it to some fixed number makes it easy to write the policy, but it doesn't make it a *good* policy.

Comment Re: It's all about definitions. (Score 1) 93

In an elite school it doesn't seem there would be a whole lot of "year full of dumb people" happening.

In a given class, though, there will be variation. If your grade depends not just on how well you did, but on how well the other people in your class did, it's a fundamentally useless metric. One person can happen to land in a couple of classes where half the people were valedictorians and end up with a B, while another person in the same year takes a different section of the same class with a different cohort, turns in exactly the same quality of work, and gets an A.

Any sort of stack ranking makes grading completely and totally worthless, even when evaluating people who were at the same school at the same time. It literally tells you nothing more than that a particular student was better than the people in that specific section of that specific class.

This sort of stack ranking also creates a strong disincentive for smart people to take classes with a smaller number of students, because the variability in the quality of students will be higher.

I would say that any limits like this should be applied over a five-year rolling window covering all sections taught by a specific professor. That way, a professor who is approaching the threshold can adjust the grading slightly overall to stay within the limits without excessively penalizing students in a section that happens to be full of really smart people.

Alternatively, you could provide an escape hatch where a professor can justify exceeding the policy on a one-off basis, but where it has to be independently reviewed, and if it keeps happening, it becomes a problem for the professor.

If you don't do one of those two things, then what you're doing is causing artificial grade *deflation*, which results in an unfairly/randomly biased ranking signal. And I'm of the opinion that doing so makes grades even less useful than their current questionable level of utility.

Or we could just acknowledge that grades are a poor measurement of students' ability in the real world and abandon them entirely, replacing them with pass/fail signals, where each subject area within a course must provide a pass signal for the class as a whole to be passing. Better yet, make it tristate: P, NMP, F, where P means it should count across the board, NMP means it is good enough to pass if it isn't a course in your major area, and F means it isn't good enough to get credit.
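A tiny sketch of that tristate scheme (Python; the names and the "weakest subject area wins" rule are my reading of the idea above, not an established system):

```python
from enum import Enum

class Grade(Enum):
    P = "pass"              # counts toward any degree requirement
    NMP = "non-major pass"  # counts only outside your major area
    F = "fail"              # no credit anywhere

# Worst grade first, so min() over this order finds the weakest area
_ORDER = [Grade.F, Grade.NMP, Grade.P]

def course_grade(area_grades):
    """A course's grade is that of its weakest subject area: every
    area must pass for the course as a whole to pass."""
    return min(area_grades, key=_ORDER.index)

def earns_credit(grade, in_major):
    """Whether a course outcome earns credit under the tristate scheme."""
    if grade is Grade.P:
        return True
    if grade is Grade.NMP:
        return not in_major
    return False
```

So a student who aces most of a course but fails one subject area gets an F for the course, and an NMP is worth credit to everyone except majors in that field.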

Let companies actually spend the time and money to interview more candidates to find out whether they are worth hiring instead of relying on noisy numeric signals as a crude filter.

Comment Re:The geothermal plant already exists [Re:MS Pow. (Score 1) 70

3. They're a reliable customer of power. That means that they will always pay the bill, even if it is high. The grid operators and generation plant operators can charge them a huge premium for bulk power, then use that extra revenue to build more power plants.

I needed a good laugh, but that is exactly the opposite of how it actually works. They will get a discounted bulk price, or they'll build somewhere that will offer one.

You're describing the situation in places where demand doesn't exceed capacity. I'm describing how any sane person running an electrical utility would bill things in a place where a company wants to put in a data center that exceeds available capacity. They'll hit them with capacity charges based on their usage during peak demand periods, or they'll make them pay for capacity improvements as part of the connection charge, or both.
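As a toy illustration of what those capacity charges look like (the rates here are invented, not any real utility's tariff), a bulk-power bill is typically an energy charge plus a demand charge keyed to the customer's draw during the utility's peak window:

```python
def monthly_bill(kwh_used, peak_kw,
                 energy_rate=0.08,   # $/kWh, illustrative
                 demand_rate=15.0):  # $/kW of peak-window demand, illustrative
    """Energy charge plus a capacity (demand) charge: a data center
    running flat out pays for its full draw at the utility's peak,
    every month, on top of the energy it consumed."""
    return kwh_used * energy_rate + peak_kw * demand_rate

# A 100 MW data center running flat out for a 30-day month
bill = monthly_bill(kwh_used=100_000 * 24 * 30, peak_kw=100_000)
```

The point is that the demand component scales with worst-case draw, not average use, which is exactly the lever a utility uses on a customer big enough to strain capacity.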

And the "or they will go somewhere that will" part shouldn't really be a concern. They want to build there. The generating capacity isn't adequate. They either pay whatever fees are necessary to ensure the stability of the grid or they don't build. If that causes them to build somewhere else where adequate capacity exists, that's also fine.

Comment Backwards from what they think? (Score 5, Insightful) 63

Given that big companies have already made it clear that they think AI will let them do the same work with fewer people, and given that using AI costs the company a lot in terms of compute resources, it seems intuitively obvious that the only reason execs would want to encourage more AI use is to find out what jobs can easily have their headcount reduced by more use of AI.

The people using the most tokens are the ones for whom more of their jobs can be most easily automated. This is not, IMO, a positive sign for the long-term survival of that particular job role. The only rational response is to use AI just enough to show a speed-up, assuming the speed-up actually happens at all, but not enough to be high up on the chart of AI users. Using it way more than that seems self-defeating.

Comment Re:The geothermal plant already exists [Re:MS Pow. (Score 4, Interesting) 70

The summary says that this thing is supposed to be geothermal powered. So they just have the cart before the horse here. They need to set up the geothermal power plant first, then build the datacenter after the power plant is operational.

The geothermal plant already exists: https://www.globalelectricity....

Apparently, Microsoft was proposing to build the data center there and tap into the existing geothermal power, not build new geothermal power (the summary was a little confusing about that).

Yeah, that was confusing. But Kenya's president is almost certainly wrong. Here's why:

1. It is not numerically correct, assuming the numbers in the summary are accurate. The country has a surplus adequate to power the data center at somewhere around half to three-quarters capacity even at peak power use, and probably at full capacity for 99 days out of 100. So even if they built it at full capacity right off the bat and did nothing else, you'd still only lose power to a small fraction of Kenya occasionally.

2. They're not building it at full capacity. They're building a small data center at first, then building it up over time as more generating capacity comes online.

3. They're a reliable customer of power. That means that they will always pay the bill, even if it is high. The grid operators and generation plant operators can charge them a huge premium for bulk power, then use that extra revenue to build more power plants. By the time the data center is running at full capacity, they could have more than enough power to power it.

4. Even if that extra investment in production doesn't happen, they can just refuse to provide the additional power from the grid. I'm sure Microsoft knows how to do solar + storage by now, and if not, they can pay someone to do it for them who does. Or they can build their own geothermal plant right next to the existing one. Or they can do any number of other things to produce power, like installing an SMR.

5. Nothing inherently prevents them from reducing power usage during peak load periods. Service will get slower, but should gracefully degrade, assuming they're doing it right. Nobody will lose power, realistically speaking.

It is unfortunate that so many people look at these data centers and the current worst-case state of resource availability and conclude wrongly that they are infeasible, but this is a common mistake made by planners, legislators, and members of the general public. They fail to account for how the existence of the data center with its need for resources will trigger the production of facilities to exploit previously unusable resources and make them available, and they fail to recognize that in a true power emergency, they can just turn 90% of it off and shift the load to other data centers.

But the reality of the matter is that nobody is going to build a gigawatt of additional power capacity in Kenya unless the government or some private company that needs power pays them to do it. They already have a 23 to 30% surplus compared with their worst-case power consumption. That means that adding more production will just drive power prices down, so they'll get less money for the power they produce.

But as soon as someone like Microsoft starts needing enough power to pull those margins down, suddenly additional capacity becomes economically feasible, and you'll see either existing power companies expanding or new power companies entering the market. And the existence of an all-but-guaranteed higher future demand is the key to making that happen. Without the data center being approved, that motive to expand does not exist, and the grid will likely stay at or near its current levels unless the government forces the hand of the market by paying someone to build more generating capacity.

Comment Re:What ... (Score 1) 105

too many folks are still stuck on IPv4

Printer is IPv6 only?

What I'm saying is that if everyone had IPv6 in their homes and offices, remote access wouldn't require all the silly cloud server games. You could just hit the device directly by its IPv6 address, and assuming your router supports UPnP pinholes, you're done. You'd need dynamic DNS and that's it.
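For example (a sketch; the hostname and port are hypothetical, and it assumes your dynamic DNS name has an AAAA record and the router pinhole is open), reaching the printer directly over IPv6 needs nothing but an ordinary socket:

```python
import socket

def connect_ipv6(hostname, port):
    """Open a TCP connection straight to a device's global IPv6
    address, skipping any cloud relay. Resolution is pinned to
    AF_INET6 so we never silently fall back to a NATed IPv4 path."""
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET6,
                               socket.SOCK_STREAM)
    family, socktype, proto, _, sockaddr = infos[0]
    sock = socket.socket(family, socktype, proto)
    sock.connect(sockaddr)
    return sock

# e.g. connect_ivp6 usage (hypothetical name; 9100 is the usual raw-printing port):
# sock = connect_ipv6("printer.example.dyndns.org", 9100)
```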

I can understand the remote printing (not on the same network) part. But only up to the point where something jams and I'm not there to yank the plug and untangle it before it gets hopelessly borked.

An emergency stop button in the app should be able to do the same thing. If that's not possible, it's a rather bad design flaw.

Also, if something jams in a way that could cause meaningful damage (beyond having to brush blobs of filament off of the hot end) and the printer doesn't detect it, that's also a rather bad design flaw.

Comment Re: All according to plan. (Score 1) 212

I have an F-150 Lightning. It's 2 $200 parts to convert from NACS->CCS1 (one for DC, one for AC). The connector type doesn't matter. CHAdeMO requires an adapter that costs thousands. It's not comparable.

CHAdeMO to Tesla adapter: $565. If adapters in the reverse direction from NACS to CHAdeMO cost thousands, it's because the market is too small to achieve economies of scale. Yeah, you need some active electronics to negotiate the protocol, whereas NACS uses the CCS protocol, so you can do it with a passive adapter, but the actual DC is still DC.

Comment Re:Stop purchasing Bambu products (Score 1, Insightful) 105

I like their products. I just want printing without fuss and without having to learn every detail about leveling, etc. Their product works for me and I do not care about its openness, it is about as important for what I need it as my headphones being open sourced (not at all). So this product is for my use case, not for people who want to control every aspect of their printer and every software feature.

The problem is that their model works until it doesn't.

Having a good out-of-the-box "it just works" experience doesn't preclude letting people tinker. If anything, letting people tinker results in a better out-of-the-box experience in the long term, because the manufacturer can see what people are doing with their technology and can clean it up and make it more broadly available if it is useful. The key is ensuring that the default experience doesn't require tinkering for the majority of customers. And seeing people tinker shows you where the sharp edges are that need to be polished.

But more than that, locking down this sort of hardware means that when you inevitably run into some limitation, if the manufacturer doesn't provide a way around it, you're stuck. And the problem is that a lot of users of advanced tools like this are in a situation where 90% of their use is common to all other users, and 10% isn't. And different users have different 10% use cases. So you could be in a situation where 80% of your users need one thing that your product doesn't support, but it's a hundred different "one thing"s. This makes support very difficult if you don't allow tinkering.

But the worst part is that you can't know for sure whether you're going to be in that 80% until you run into the use case that they don't support out of the box. It could be a week, a month, a year, or several years. And then you're stuck with this hardware that won't do what you need, with no way to fix it, thus forcing you to replace it with a product from some other manufacturer.

So even if you don't think you will ever want to tinker with your 3D printer, assuming all else is roughly equal, you're better off choosing the printer that gives you the most control of the hardware, because that is the least likely to box you into a corner and make you regret your purchase someday.

Comment Re:Can free ICQ clients use ICQ servers, reloaded (Score 1) 105

Same discussion as 30 years ago with open source clones of messaging apps such as ICQ. The open source client pretends, in those days through reverse engineering, to be the official client. Ultimately, it was okay then, because it was beneficial for the operators to have a larger network of users who can talk to each other. Does this dynamic apply here?

I'd have gone with "Every web browser is Mozilla", personally, but yes.

If you're using a user agent for any sort of security purpose, you're not just doing security wrong; you're doing security so wrong that somebody is going to write an entire book as a postmortem about your company.

Moreover, if your service can't handle the traffic of a mere few thousand clients (four-digit QPS) hitting it at once, you have much bigger problems than security. I forgot how to count that low a long time ago.

Finally, the elephant in this room is that those "unauthorized" clients are YOUR USERS. They are people who bought YOUR HARDWARE and want to use it with your service. Basically, you're flipping off your paying customers. That's the fastest, easiest way to ensure that you don't have any of those anymore.

Comment Re:Stop purchasing Bambu products (Score 3, Informative) 105

Threats of lawsuits (especially to open source products, which do not have deep pockets) are the new corporate approach to what would appear to be appropriate reverse engineering. The only way forward, if you disagree, is to refuse to purchase any Bambu products.

Already done. When I was choosing what 3D printer to buy to replace my aging Snapmaker A350 last year, I read about Bambu's questionable commitment to openness, and decided to buy a Creality printer (K2 Plus with CFS) instead. Over the year that followed, I bought a Creality Hi with CFS as a second printer, plus two additional CFS units, a filament dryer, a spare Creality tool kit (since the Hi doesn't come with one), and more than half a grand worth of filament.

I've personally spent close to $3,700 on Creality products in the last year (not counting third-party filament and the DXC2 extruder upgrade) precisely because Bambu comes across as being a bunch of litigious a**holes who are trying to lock down their products and prevent users from being able to modify the hardware that they bought.

As far as I'm concerned, they've dug their grave in the 3D printer market. Stick a fork in it. They're done.
