Comment Good luck (Score 1) 2
I've been looking since March 2024. Having no reasonable options in sight, have reopened https://informationr.us/
In this job search I've used LinkedIn and Dice, but MOST of my LinkedIn activity quickly devolves into one of the above. The number of scammers on LinkedIn is truly awesome.
Google learned to embrace, extend and extinguish right out of Microsoft's playbook. They were excellent students and you can see the results in how email and web "standards" work today.
The difference is that when Microsoft did it, the authorities eventually started getting in their way to promote more openness and competition again. So far there is little sign that anyone intends to challenge the way a few tech giants have recently been capturing long-established standards that we rely on for what have become vital services, effectively taking ownership of them for their own purposes. The governments and their regulators are either asleep at the wheel or, if you're a bit less trusting, bought and paid for.
Do you often use VeraCrypt on a company-managed device? I'm sure if you do then it's with the knowledge and consent of your IT department and they'll be responsible for managing any consequences of the VeraCrypt issue according to their official policy as well.
Is it just me, or are these three platforms the arena of bad decision-making in startup businesses? When somebody tries to lure me off of social media into one of these three platforms, alarm bells start ringing in my mind. If you're leading your business with communications on Signal or WhatsApp, just know that I for one will not be taking your business seriously.
Grok was constantly saying it was doing something that it had ZERO ability to do, and I kept calling it out; it kept apologizing and then immediately doing it again.
As a guy who spends 5 figures a year on AI, the last thing I want is that. I know Claude and ChatGPT also do it, but Grok was doing it CONSTANTLY.
Yes. So far, the LLM tools seem to be much more useful for general research purposes, analysing existing code, or producing example/prototype code to illustrate a specific point. I haven't found them very useful for much of my serious work writing production code yet. At best, they are hit and miss with the easy stuff, and by the time you've reviewed everything with sufficient care to have confidence in it, the potential productivity benefits have been reduced considerably. Meanwhile even the current state of the art models are worse than useless for the more research-level stuff we do. We try them out fairly regularly but they make many bad assumptions and then completely fail to generate acceptable quality code when told no, those are not acceptable and they really do need to produce a complete and robust solution of the original problem that is suitable for professional use.
But one of the common distinctions between senior and junior developers -- almost a litmus test by now -- is their attitude to new, shiny tools. The juniors are all over them. The seniors tend to value demonstrable results and as such they tend to prefer tried and tested workhorses to new shiny things with unproven potential.
That means if and when the AI code generators actually start producing professional standard code reliably, I expect most senior developers will be on board. But except for relatively simple and common scenarios ("Build the scaffolding for a user interface and database for this trivial CRUD application that's been done 74,000 times before!") we don't seem to be anywhere near that level of competence yet. It's not irrational for seniors to be risk averse when someone claims to have a silver bullet but both the senior's own experience and increasing amounts of more formal study are suggesting that Brooks remains undefeated.
I was actually in college in the 1990s, but yes, a middle schooler today with Python on a Raspberry Pi and a pretty simple GPS module could do this.
I didn't say it wasn't abhorrent or alarming. I'm presenting the scenario that this task of "defend this three dimensional coordinate box" doesn't require AI.
Yes, it did. The beacon signals weren't that good back then, and neither were the sensors. I had the same problem in the fake robot battles I was involved in.
The answer turned out to be a solution not from Defense industries, but from Genie Garage Door Openers.
The robot doesn't care. The robot's job isn't foreign policy. The robot's job is "here's a box defined by this coordinate cloud, defend it"
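The core check really is that simple. A minimal sketch in Python (all names and the example coordinates are hypothetical, not from any real system): given a defended box as min/max bounds on latitude, longitude, and altitude, decide whether a GPS fix falls inside it.

```python
# Hypothetical geofence check: is a (lat, lon, alt) fix inside a 3D box?
# The box is three (min, max) bound pairs, one per axis.

def inside_box(fix, box):
    """fix: (lat, lon, alt); box: ((lat_min, lat_max), (lon_min, lon_max), (alt_min, alt_max))."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(fix, box))

# Example: a made-up patrol volume and two GPS fixes.
box = ((40.00, 40.10), (-75.10, -75.00), (0.0, 500.0))
print(inside_box((40.05, -75.05, 120.0), box))  # True: inside all three bounds
print(inside_box((40.05, -74.90, 120.0), box))  # False: outside the longitude range
```

A real system would need sensor filtering and a response policy, but the "defend this coordinate box" decision itself is a handful of comparisons.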
Like I said, I programmed it for a fighting robot back in the 1990s. It ain't that complex, and with today's drone factory ships, the Navy can now output this level of AI in killbots at a rate of 10,000 a day.
This will not end well.
Why is this a problem?
I do not want my software censoring anything I make.
Function reject.