Comment Re:Yes, and it's even worse than that... (Score 1) 75
It is illegal* to ask if candidates are married.
It is illegal* to ask if candidates have children.
It is illegal* to ask if candidates live with their parents.
* In America.
Yes. So far, the LLM tools seem to be much more useful for general research, analysing existing code, or producing example/prototype code to illustrate a specific point. I haven't found them very useful for much of my serious work writing production code yet. At best, they are hit and miss with the easy stuff, and by the time you've reviewed everything carefully enough to have confidence in it, the potential productivity benefits have shrunk considerably. Meanwhile, even the current state-of-the-art models are worse than useless for the more research-level work we do. We try them out fairly regularly, but they make many bad assumptions, and when told those assumptions are not acceptable and that they really do need to produce a complete, robust solution to the original problem that is suitable for professional use, they completely fail to generate code of acceptable quality.
But one of the common distinctions between senior and junior developers -- almost a litmus test by now -- is their attitude to new, shiny tools. The juniors are all over them. The seniors value demonstrable results, so they tend to prefer tried-and-tested workhorses to shiny new things with unproven potential.
That means if and when the AI code generators actually start producing professional-standard code reliably, I expect most senior developers will be on board. But except for relatively simple and common scenarios ("Build the scaffolding for a user interface and database for this trivial CRUD application that's been done 74,000 times before!") we don't seem to be anywhere near that level of competence yet. It's not irrational for seniors to be risk averse when someone claims to have a silver bullet, but both their own experience and a growing body of more formal study suggest that Brooks remains undefeated.
(+1, Truth)
Of all the major streaming platforms, Paramount+ stands alone in how often it just doesn't work. It doesn't work reliably on state-of-the-art streaming boxes. It doesn't work reliably on desktop PCs. In fact, of all the devices we have in our household, it works reliably on a total of zero of them.
We have several of the other commercial streaming platforms plus the apps or online services for several of our main national TV channels as well and almost all of them work almost all of the time. It's bizarre how bad Paramount+ manages to be compared to literally everyone else. It must be hurting their bottom line to some degree or surely will do soon if they don't get a handle on it, because why pay for something you literally can't watch?
America has about a billion acres of farmland.
40k is 0.004%.
But the 40k is not all in America, and is not all farmland.
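The percentage above checks out; a quick sketch using the figures from the comments (about a billion acres of US farmland, versus the 40k acres in question):

```python
# Sanity check of the share quoted above, using the comment's own figures.
us_farmland_acres = 1_000_000_000  # roughly a billion acres
holding_acres = 40_000

share_percent = holding_acres / us_farmland_acres * 100
print(f"{share_percent:.3f}%")  # prints 0.004%
```

As the follow-up notes, the real share of US farmland would be even smaller, since not all of the 40k acres is American farmland.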
Vendor lock in.
GnuCobol works fine.
It is the Microsoft way.
There's a difference between not using AI tools at all and not using code generated by AIs.
The latter involves a lot of risks that aren't well understood yet -- some technical, some legal, some ethical -- and it's entirely possible that some of those risks are going to blow up in the face of the gung-ho adopters with existential consequences for their businesses.
I mostly work with clients in industries where quality matters. Think engineering applications where equipment going wrong destroys things or kills people and where security vulnerabilities are a proxy for equipment going wrong.
I know plenty of smart, capable people working in this part of the industry who are totally fine with blanket banning the use of AI-generated code on these jobs. A lot of that code simply isn't up to the required standards anyway, but even if it does produce something you could actually use, there are still all the same costs for review and certification that any other code incurs. That includes the need for at least one human reviewer to work out why the AI wrote what it did, which may or may not have any better answer than "statistically, it seemed like a good idea at the time".
The claims also seem a bit sus. "Eighty percent of new developers on GitHub use Copilot within their first week." Is this the same statistic someone was debunking recently where anyone who had done something really basic (it might have been using the search facility?) was counted as "using Copilot"? A lot of organisations seem to be cautious about using code generated by AIs, or even imposing a blanket ban, so things must be very different in other parts of the industry if that 80% is also representative of professional developers using Copilot significantly for real work.
If AI translation is good enough
Pure AI isn't good enough, and that isn't what most people will submit.
AI makes mistakes and humans make mistakes, but they make different mistakes. Humans make grammar and word-choice mistakes much more often than AI does, while AI more often makes factual mistakes.
A researcher can run the original through an LLM to produce a well-written, grammatically correct translation. Then they read it and correct the factual errors. Researchers who can't write English well can usually still read it well enough to catch factual errors.
If they do this in America
America already does it. Several states ban social media for teens. Utah was first. Other states followed.
will they do it with the same caution and thoughtfulness I wonder?
Most states with bans leave it up to the tech companies to figure out how to verify age.
A junior sales rep costs $150K a year?
That's the cost to the business, not the salary to the employee.
Payroll plus benefits, management overhead, office space, heating/cooling, parking, etc. typically adds up to twice what the employee is paid.
An AI doesn't need any of that stuff, doesn't take vacations, or sick days, or weekends, or need sleep.
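The "roughly twice the salary" rule of thumb above can be sketched as follows. The breakdown ratios here are purely illustrative assumptions (real ones vary widely by company and location); the point is just that a $150K fully loaded cost is consistent with a salary of roughly $75K:

```python
def fully_loaded_cost(salary):
    """Rough total cost to the business for one employee.

    The ratios below are hypothetical, chosen only to illustrate
    the ~2x-salary rule of thumb from the comment above.
    """
    benefits = 0.30 * salary    # insurance, retirement, payroll taxes
    overhead = 0.40 * salary    # management, HR, IT support
    facilities = 0.30 * salary  # office space, utilities, parking
    return salary + benefits + overhead + facilities

print(fully_loaded_cost(75_000))  # about 150000 -- twice the salary
```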
many people have no idea who has their data.
Why should I care?
What bad thing will happen to me if some random business knows what brand of tea I drink?
STOP assuming every website should function perfectly on a fucking 2” screen.
30 million American adults use their phone as their primary means of accessing the Internet.
Half of those don't even own a computer.
So, yes, websites offering basic services dang well better work on a small screen.
I never really understood rebranding at all. Take what your customers know about your brand, and throw it in the trash.
A Dell customer might think about the brand only when considering a new laptop.
But the marketing department sees the brands a dozen times a day, gets tired of them, and lobbies for a refresh.
Rebranding is almost always a mistake.
Nothing succeeds like the appearance of success. -- Christopher Lasch