Comment Re: Hell hath frozen over! (Score 1) 102
Oh, and a "train" is a bit like 100 cars back to back.
Anyone expecting corporations not to try to make a profit and extract maximum value for their shareholders ignores that doing so is their fiduciary duty.
"this belief is utterly false. To quote the U.S. Supreme Court opinion in the recent Hobby Lobby case: 'Modern corporate law does not require for-profit corporations to pursue profit at the expense of everything else, and many do not.'"
https://www.nytimes.com/roomfo...
> Am I the only one that can't imagine any possible value an AI assistant would bring to a game?
I use AI assistants lots when playing games!
At the moment it's Minecraft. I want to figure out how to build something, e.g. a golem farm. I look for tutorials online but (1) they're all videos, which I hate watching, and (2) they're all hyper-specific and concrete, "place this block here then that block there", but what I want to understand are the foundational principles so I can adapt the golem farm to my own purposes -- what are the mechanics, how do golems spawn, how does water flow, what is the SOLUTION SPACE of possibilities.
Gemini AI has been really good at this kind of thing.
The other time is when I get stuck, or want advice on how to make a character build to achieve a certain end. Once again the online advice is typically in the form of "walkthroughs" -- do step 1, then step 2, then step 3 -- in other words just one possible way to play the game, and it's too easy to accidentally read too far and spoil the rest. I don't want that; I like the feeling of openness and possibility. So I ask Gemini instead, and it gives me advice on just the particular bit I'm stuck on, and is better at showing me the available options.
Lua remains the most common choice today for games that offer scripting/modding; it's pretty much the industry standard (outside of C# for Unity).
Fox News is just about always truthful. You just have to watch out for the tricks they use (on 95%+ of their stories)...
(1) Non-representative selection. Headline: "Illegal immigrant murders local mother", which is true in that one case, but they don't report the other 99 murders that weren't committed by immigrants, nor the general trend of immigrants committing less crime per capita. (I made up this specific example to illustrate the trick.)
(2) Reported quotes. Headline: "Biden's senility was covered up, says person". They are 100% factually reporting that the person did indeed say this.
In both cases the reader is left with an untrue impression despite the stories containing only truth. It's because it's not the whole truth.
The irony is that this article itself reads like AI-generated slop, with ridiculous duplication; maybe a low-effort AI-assisted piece by an author who couldn't be bothered to edit it.
The hallucination problem _cannot_ be fixed. It is a fundamental part of the mathematical model.
I think it can. I've been working on getting an LLM (Claude Sonnet 3.7) to add missing type annotations to Python code. When I naively ask it "please add types" then, like you said, it has about a 60% success rate and a 40% hallucination rate, as measured by "would an expert human have come up with the same type annotations, and did they pass the typechecker?"
But when I use the LLM much more carefully, micromanaging which sub-tasks it does, it has a 70% success rate and a 30% rate of declining because it didn't have enough confidence to give an answer. Effectively there were no more hallucinations. (I got these numbers by spot-checking 200 cases.)
So I think hallucination can be solved for some tasks, by the right kind of task-specific micromanagement and feedback loops.
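The feedback loop I'm describing can be sketched roughly like this. This is a simplified illustration, not my actual pipeline: `ask_llm` and `run_typechecker` are placeholder hooks standing in for a real LLM API call and a real mypy invocation.

```python
# Sketch of a micromanaged annotation loop: ask for one small sub-task at a
# time, give the model an explicit way to decline, and verify mechanically.
# `ask_llm` and `run_typechecker` are placeholders, not a real API.

def annotate_function(source: str, ask_llm, run_typechecker):
    """Ask the LLM to annotate one function; return None rather than guess."""
    answer = ask_llm(
        "Add type annotations to this single function. "
        "If you are not confident, reply exactly DECLINE.\n" + source
    )
    if answer.strip() == "DECLINE":
        return None  # model declined instead of hallucinating
    if not run_typechecker(answer):  # e.g. run mypy on the annotated snippet
        return None  # annotation didn't typecheck; reject it
    return answer
```

The key design choice is that every path out of the loop is either a mechanically verified answer or an explicit "no answer", which is what drives the hallucination rate toward zero for this kind of task.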
OpenAI has about $12bn in annual revenue, roughly 3% of Apple's, which works out to about $3 million per employee per year (compared to about $2 million per employee per year at Apple).
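A quick back-of-envelope check of what those figures imply (all inputs are the approximations above, not exact numbers):

```python
# Back-of-envelope arithmetic for the revenue-per-employee comparison.
openai_revenue = 12e9                      # ~$12bn annual revenue
apple_revenue = openai_revenue / 0.03      # ~3% of Apple's => ~$400bn
openai_employees = openai_revenue / 3e6    # ~$3M/employee => ~4,000 people
apple_employees = apple_revenue / 2e6      # ~$2M/employee => ~200,000 people
```

These implied headcounts are only ballpark (the real figures for both companies differ somewhat), but they show the two per-employee numbers are mutually consistent with the revenue ratio.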
I think OpenAI has a huge amount of growth potential just from predictable adoption over the next several years, even if step changes towards AGI don't come.
All good in theory, except that you'd likely need something like a 200" TV to actually tell the difference between 8k and 16k.
Like I said, I figured 8k would be enough resolution for soccer. As for 16k, I imagine that a system with enough bandwidth for 16k could instead spend that bandwidth on 8k at twice the frame rate, which would be ideal for soccer.
[Lawrence of Arabia] Let me guess, you are watching these classics at 1080p, or at best 4k.
I watched Lawrence of Arabia on a Cinerama screen. It was breathtaking. I expect that the higher resolutions described here will help more places (like movie theaters) display higher quality prints. I suspect they'll open up new avenues like fake windows or full-wall screens in residences.
Do you watch soccer? 4k resolution means a player's head is about 14 pixels high, not enough to make out much beyond a blob of color; their jersey is 60 pixels high, enough to make out the number but not much more. Doubling the vertical resolution (i.e. going to 8k) would likely be enough to let you make out similar detail to what you'd see in real life. (Frame rate is another issue: HDMI 2.0 allows 4k at 60hz which is too slow when panning in a soccer game; HDMI 2.1 allows 4k at 120hz which is probably enough). I think that 16k is probably the right bandwidth to get soccer looking good.
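The pixel counts above follow from simple proportions. Here's the arithmetic, assuming the camera's vertical field of view covers roughly 40 m of pitch (my assumption, chosen to match the ~14-pixel head figure; the 0.25 m head and 1.1 m visible-jersey heights are also my rough estimates):

```python
# Back-of-envelope for how many pixels tall on-screen objects appear,
# given the vertical resolution and an assumed ~40 m vertical field of view.

def pixels(object_m: float, vertical_px: int, field_view_m: float = 40.0) -> int:
    """Pixels an object of a given real-world height occupies on screen."""
    return round(object_m * vertical_px / field_view_m)

for label, v in [("4k", 2160), ("8k", 4320), ("16k", 8640)]:
    print(label, "head:", pixels(0.25, v), "px; jersey:", pixels(1.1, v), "px")
```

Each doubling of vertical resolution doubles every object's pixel height, which is why 8k roughly quadruples the detail per player compared to 4k.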
Do you do VR? 4k per eye isn't good enough for VR yet. It's possible that 16k will be, but we might still need more.
Do you watch the gorgeous film classics like Lawrence of Arabia? One of the (many) things that make it look great is that it was shot on 65mm, equivalent to about 12k resolution.
The military team scrubs *alerts about objects*.
Three days later, the full unscrubbed image is shared with astronomers.
I guess they don't and can't care about stable orbits, but they want to retain surprise for dynamic situations.
It's easy to regulate AI at the state level.
"Any job offer for a job based in California must adhere to the following AI disclosure".
"Any mortgage offered on a California property must satisfy the following AI disclosure."
etc.
AI regulation need not be about regulating AI innovation; it's enough merely to make sure it's applied fairly. And almost all real-world applications are indeed local.
Does MS not have such agreements in place?
I used to work at Microsoft. My employment contract specifically called out a load of personal pre-existing projects, plus ongoing and future ones, and stipulated that MS would have no ownership nor claim. I did ask for these callouts, but they were happy to go along.
I'm a software developer. Part of AI is like if I had 200 interns working for me -- some of them smarter than me and already more knowledgeable about some areas, some of them not, none of them familiar with my team's codebase. There are real cases where I could get those 200 interns to do real useful work and would want to! e.g. if I create a very detailed playbook of how to make certain code improvements, ones that wouldn't be worth my time to do myself one-by-one, but if I had 200 interns and an automated way to verify that they did a good job, then sure!
The article says "manage a team of AI agents". Managing in this sense isn't like managing a human; it's like writing a shell-script to manage some bulk process.
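The shell-script style of "management" can be sketched like this. It's a deliberately minimal illustration of the analogy, with `run_agent` and `verify` as placeholders for real agent tooling and an automated check (tests, typechecker, lint):

```python
# "Managing" agents as a scripted bulk process: fan tasks out, verify each
# result mechanically, keep only what passes. No human-style management.

def manage_agents(tasks, run_agent, verify):
    """Run one agent per task and keep only mechanically verified results."""
    accepted = []
    for task in tasks:
        result = run_agent(task)   # e.g. spawn one agent process per task
        if verify(result):         # automated check, not human review
            accepted.append(result)
    return accepted
```

The point of the analogy: the "manager" never exercises judgment per result; acceptance is delegated entirely to the automated `verify` step, exactly as a shell script would pipe output through a filter.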
Is there a practical home-use for an 8k monitor/TV?
I think there is for sports. Watch soccer on a 4k TV: the camera is usually pulled back far enough to see a lot of the field, so each individual player on a 4k screen (3840x2160) is about 150 pixels tall, and the number on their jersey is about 30 pixels tall. That's usually not enough for me to make out what's happening; I can make it out better live in person. An 8k screen, I think, would be enough. I'd sit closer to it than your 8' if I wanted to watch. (Likewise, at IMAX I like to sit about 5 rows from the front so the screen fills my peripheral vision.)
The rule on staying alive as a program manager is to give 'em a number or give 'em a date, but never give 'em both at once.