Comment Dealers still will screw over customers. (Score 1) 58

Around 1999 I was doing some business with GM and was dealing with their top marketing guy. He had great hopes that "this internet web thing" would allow GM to finally bypass the dealer network. He knew it would take many, many years, but he so very much hated the dealers. He blamed many of GM's problems on them.

I see in these announcements that you are still effectively going through a dealer. These dealers will figure out a way to fuck over the customer. This is what they do. This is who they are. Fucking over customers is their primary business. Selling cars is just a way to bring the customers within their grasp.

Comment This is not about safety (Score 1) 42

The big AI companies are pushing for these rules not because they give a shit about ethics and safety, but because they want the small companies coming up with highly competitive AI to be unable to afford the gauntlet of regulations.

This level of reporting will no doubt be very expensive. It will be a fixed cost that the majors can easily afford and a startup can't. I suspect that if your AI is somewhat innovative it will also run afoul of these regulations, and once reported it will trigger very expensive investigations which will tie up and distract a startup.

Comment The big companies will do this to AI (Score 1) 261

The absolute last thing the big companies want is a bunch of us jackasses running our own LLMs. They will get the government to declare this dangerous, and without a doubt the government will pile on regulations which pretty much restrict us to using the major AI companies' APIs, and that will be it.

The only saving grace will be how hard it will be to define an AI. This will open up new avenues where people create things the rest of us will call an AI but the regulators won't.

Comment It sort of is, but for junior programmers. (Score 2) 99

I use Copilot combined with ChatGPT as a kind of pair programmer.

Rarely does it do anything I couldn't do, and often it doesn't even do it as well as I could. But it speeds my work right along, doing the boring for loops, etc.

But where it really kicks some ass is the super drudge work: cooking up unit tests, putting asserts everywhere, making sure every conditional is covered, etc.
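To give a flavor of what I mean, here is a made-up little function and the kind of boilerplate edge-case tests these tools will happily churn out (hypothetical example, sketched as pytest):

```python
# Hypothetical example: a small function plus the edge-case boilerplate tests
# I would rather let the tools write than assign to a junior programmer.
import pytest

def parse_port(value: str) -> int:
    """Parse a TCP port number from a string, raising ValueError if out of range."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_valid_port():
    assert parse_port("8080") == 8080

def test_lower_bound():
    assert parse_port("1") == 1

def test_upper_bound():
    assert parse_port("65535") == 65535

@pytest.mark.parametrize("bad", ["0", "65536", "-1"])
def test_out_of_range(bad):
    with pytest.raises(ValueError):
        parse_port(bad)

def test_not_a_number():
    with pytest.raises(ValueError):
        parse_port("not-a-port")
```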

Some of these things are exactly what junior programmers get assigned, and where they learn. Pair programming is another huge learning opportunity for junior programmers. Except I don't want to work with a junior programmer ever again. With these tools doing the drudge work, I can crank out unit tests and integration tests and stay hyper-focused on the hard stuff.

Before this, a junior programmer could be helpful and occasionally had some cool trick or fact I could learn from. But now they are almost pure productivity-sapping distractions.

Another group of programmers is the rote-learning algo fools. I can get these AI tools to give me just about any answer I need, even where it has to twist graph theory into knots. These people were mostly useless to begin with, but now they are officially worse than useless.

And this is exactly where I see a big cadre of programmers getting shot: junior programmers who will now largely go mentorless, and those rote-learning algo turds who used to get jobs at FAANGs because some other rote-learning fool convinced everyone that rote learning was good.

I asked ChatGPT what these people should do and it said, "They should go back to their spelling bees.... nerds."

Comment Worked in SCADA and this is the tip of the iceberg (Score 2) 23

I can't overstate how bad the security around most SCADA systems is.

A very common situation: the SCADA system is fantastically critical to what makes the company go, whether that is a factory, a refinery, a pipeline, a utility, etc. As we all know, most IT departments long ago lost the plot of why they exist, but SCADA operations centers have often kicked IT out. They run their own servers, they buy their own desktops for operators, everything. The operators might go back to their cubicles and IT runs those computers, but often IT has little to absolutely nothing to do with SCADA, including even provisioning the networking, as even that can be weird.

This is a good thing in that there is no chance of the SCADA system going down while they wait in line behind the ticket system. The SCADA people will have their own experts, who often deliberately live near the servers just so they can rush over in 12 minutes to solve any urgent problem.

But it is also a bad thing, because they aren't usually IT people. They are often some guy who programmed PLCs, then got into networking, and then was moved to the SCADA operations center. These guys often have a pile of knowledge covering a vast range of tech. A large distributed asset like a utility or pipeline could easily be 100+ years old, and the equipment can pretty much cover the entire ten decades of change. They may have paper tapes recording data in one place, Modbus in another, MQTT in another, a bunch of proprietary communications protocols in another, acoustic modems, their own 1000 km of fiber optics, satellite comms, some LTE, and on and on. In the server room there could be just about every OS from the last 40 years, from VAX to a shiny new Linux box. The level of institutional knowledge one of these people typically has is insane.

But what they often have no knowledge of is security. In this environment the very concept of regular upgrades scares the shit out of them. Often the software they use is super-custom, one-off, or low-customer-base stuff, and upgrades have a long habit of blowing things up. So leaving a copy of Windows NT 15 years behind is fine. Solaris, 12 years since the last upgrade, is good. Nobody even blinks at a Red Hat install which hasn't seen an update in a few years. Why would you even think of upgrading the software on a PLC which controls something critical (as in, blows up) if you don't have to?

Often they will have a few weird-ass layers of VPNs and other crusty old security which they insist is "bulletproof".

My theory is that the reason these systems don't get hacked more is simply that most hackers don't know Modbus, serial over UDP, phone phreaking, or any of that. How many hackers know Solaris? VAX?

Most of the industrial systems I have witnessed were the ultimate in security through obscurity; extreme obscurity. So this CODESYS thing is something I laugh at. I don't know what product MS is trying to sell, but I can say without hesitation that the people in these larger industrial software companies aren't using CODESYS correctly anyway, and have probably left a trail of SQL injection holes (and other BS-easy stuff) a mile long.

Here is the level of stupid I can absolutely predict: if you look at the traffic going from almost any bit of their system to another bit, there is a low chance it is being encrypted. If it is being encrypted, they are doing it wrong, so the encryption is easy to break. Basic security hygiene like rejecting repeated messages probably isn't being done either, nor is most message authentication. So if you were to just replay a message telling something to open a valve, it would probably just open the valve.

And if you found some unencrypted messages, and one of them carried a float for pressure which normally ranged around 100, and you set it to 100 trillion or something, their software would either happily ingest this new information and act accordingly (probably an alarm or a shutdown) or, more probably, something would overflow and the software would crash. Or set values to 0 that never normally go there and watch what happens wherever they didn't do any divide-by-zero checking.
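To make that concrete, here is a sketch of the kind of naive field-device handler I mean. The packet layout, port, and field names are all made up for illustration; the point is what's missing: authentication, replay protection, and range checking.

```python
# Hypothetical sketch of a naive SCADA-style UDP handler: no authentication,
# no sequence numbers, no range checks. The packet layout is made up.
import socket
import struct

PACKET = struct.Struct("!Bf")   # one-byte command, one big-endian float
CMD_SET_PRESSURE = 0x01

def handle_packet(data, state):
    cmd, value = PACKET.unpack(data)                     # trusts whatever arrives
    if cmd == CMD_SET_PRESSURE:
        state["pressure_setpoint"] = value               # 100 or 100 trillion: same code path
        state["flow_estimate"] = state["volume"] / value # divide-by-zero waiting to happen

def listen(state, port=5020):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(PACKET.size)
        handle_packet(data, state)   # a replayed packet looks identical to a legitimate one
```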

I know of one system where a field in a communication structure says how long the following array will be, so the receiver allocates that much space. It is happy to try to allocate all the RAM on planet Earth.
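A minimal sketch of that bug, with a made-up message layout:

```python
# Sketch of the trusted-length bug: the header says how many entries follow,
# and the receiver allocates for them without any sanity check.
import struct

def read_array(payload: bytes) -> bytearray:
    (count,) = struct.unpack_from("!I", payload, 0)  # attacker-controlled count
    return bytearray(count * 8)                      # allocates count * 8 bytes, no upper bound

# read_array(struct.pack("!I", 0xFFFFFFFF)) would try to allocate roughly 34 GB.
```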

Comment Every day makes me wonder (Score 1) 85

It strikes me this isn't a very hard bit of science to replicate (or fail to replicate).

If I had to guess, most university chemical engineering labs could cook this up in an afternoon.

So, if it is easy, then someone should have replicated it by now.

Or it is really tricky to get right, with insanely high purities, timing, etc. So maybe people are failing but are reluctant to be the one to stand up and say, "Didn't work for me," not knowing whether they failed to get it right or the science is bogus.

But every day that goes by makes me wonder.

Fingers crossed it isn't BS, though. If it isn't BS, then I hope people notch down their respect for the ones who are saying this is "impossible".

Comment nVidia is a bag of assholes when it comes to AI (Score 1) 18

nVidia even has scare wording about their consumer-grade GPUs, saying they "pose a fire risk" compared to their datacenter GPUs.

If AMD wants to kick nVidia's ass, they need to do three things:
* Make a GPU roughly as good as (it doesn't have to be better than) a 4090.
* Make a version of TensorFlow that works with it on Linux, Windows, and Mac.
* Give it gobs of memory, as in 24 GB or more.

This last one is super important, as the key feature of the nVidia high-end cards is not their performance but their memory size. Often the biggest performance gain in the better nVidia cards comes from the larger memory, not the number of cores. If they went to 32 GB, 64 GB, or more, they could crush nVidia like a bug.

I don't know if there is something inherently difficult about it, but if they could just put DDR4 slots for standard RAM on the cards, that would allow ML people to customize as needed.
For, as I said, it is often the GPU RAM which is the bottleneck constraining what I can do. Even with the higher-end cards, the RAM constraint is enough that I end up buying more cards just for that RAM. Going to multiple GPUs in a desktop or server is a right pain in the ass, both to physically install and to configure. I've met more than one ML person who thought they had multiple GPUs going, only to find they were using just one.
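For what it's worth, that last sanity check is cheap. A minimal sketch with TensorFlow 2.x (the library mentioned above), just to confirm the framework actually sees and will use every card in the box:

```python
# Sanity check: does TensorFlow see all the GPUs, and will it actually use them?
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"TensorFlow sees {len(gpus)} GPU(s):")
for gpu in gpus:
    print("  ", gpu.name)

# MirroredStrategy is the usual way to spread training across cards; if it
# reports fewer replicas than you have GPUs, only some of them are being used.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)
```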

Comment Re:What should we do about it? (Score 4, Insightful) 76

Just a quick tip: by the time politicians get worried, something bad has already happened.

It basically always goes like this: a new technology comes along, it gets deployed, it turns out bad things can happen too and they do happen in real life... and only after the fact will politicians do something.

Climate change is all about taking the right steps now (or in the past) to prevent bad things in the future.

See how that doesn't align with politics?

On the topic of fungi specifically, I would like to see a lot more vertical farming, hydroponics, aquaponics, etc., which on average are less susceptible to such things and also depend less on the weather in general.

Comment Best lesson a CFO taught me (Score 1) 61

Years ago, I was programming a billing system for a largish company. The CFO taught me a very important lesson. She said, "Double-entry bookkeeping can be used for everything that is zero sum. Use it everywhere and it will tell you where something is wrong. For example, if 80,000 unique customers did a transaction and you don't bill non-transacting customers, then you need to make sure you use 80,000 envelopes when sending out the bills. Not 79,999, nor 80,001."

I started digging through all the databases and started finding all kinds of mismatches, and fixing many of them started saving the company hundreds of thousands per month. I then started adding stupid little checks to my code, which turned up all kinds of interesting errors. While the code which produced the bills was churning away, I would keep a running total of all debits, another of all credits, and another of all the "totals" on each bill being generated. When I then added up the debits and credits, they weren't even close to the total of all the individual bills' totals.

While the FTX numbers were really big, that is what 64-bit numbers are for. Any halfwit should have been able to create a simple report, on a single dashboard, showing the present state of their system.
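A sketch of what I mean, with made-up names and a toy billing run, just to illustrate the running-total checks:

```python
# Hypothetical sketch of the CFO's "double-entry everywhere" idea: keep running
# totals of debits, credits, and per-bill totals during the billing run, then
# assert that they reconcile and that envelope count matches customer count.
from dataclasses import dataclass, field
from decimal import Decimal

@dataclass
class Customer:
    name: str
    transactions: list = field(default_factory=list)  # positive = debit, negative = credit

def run_billing(customers):
    total_debits = Decimal("0")
    total_credits = Decimal("0")
    total_billed = Decimal("0")
    envelopes = 0

    for c in customers:
        debits = sum((t for t in c.transactions if t > 0), Decimal("0"))
        credits = sum((-t for t in c.transactions if t < 0), Decimal("0"))
        bill_total = debits - credits

        total_debits += debits
        total_credits += credits
        total_billed += bill_total
        envelopes += 1  # one envelope per billed customer

    # The zero-sum checks: if either fails, something upstream is mismatched.
    assert total_billed == total_debits - total_credits, "bill totals don't reconcile"
    assert envelopes == len(customers), "envelope count != customer count"
    return total_billed

if __name__ == "__main__":
    customers = [
        Customer("acme", [Decimal("120.00"), Decimal("-20.00")]),
        Customer("globex", [Decimal("55.50")]),
    ]
    print(run_billing(customers))  # 155.50
```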

Comment It almost doesn't matter which tool you use. (Score 1) 296

I used React but was annoyed that a typical install would run past 50,000 files. C++ with vcpkg is no better. NodeJS, same. Even my embedded IDE will haul down thousands of files if I add a library or three.

What annoys me is how many of these libraries are bloated themselves. I don't want their documentation, readmes, or unit tests, or the way they break down what could be a few dozen files into hundreds or thousands of files.
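If you want to see the bloat for yourself, here is a quick sketch that tallies what is actually sitting in a dependency tree (point it at node_modules, a vcpkg tree, whatever):

```python
# Quick sketch: count files in a dependency tree, broken down by extension,
# to see how much of it is code versus docs, tests, and metadata.
import os
import sys
from collections import Counter

def tally(root):
    counts = Counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower() or "(no ext)"
            counts[ext] += 1
    return counts

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "node_modules"
    counts = tally(root)
    print(f"{sum(counts.values())} files under {root}")
    for ext, n in counts.most_common(15):
        print(f"{n:8d}  {ext}")
```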

I don't even want to guess what is going on when I build a Docker container.
