Comment Re: Heat tiles flying off? (Score 2) 70

I think I even see evidence this was not a tank leak. If whatever was in the payload bay came from the ship's tanks, it would have spread out uniformly quite quickly. The clouds we saw looked like clouds within another gas. In other words: this must simply have been air.

The tank leak (or failure to close a valve) scenario is not an unlikely one, though; it would explain the loss of attitude control and the failure to light the engines. However, I don't think it leaked into the payload bay.

Comment Duh (Score 1) 99

Of course ChatGPT isn't coming for my job. Instead, companies built around ChatGPT (or similar) will be coming for my employer and my job will simply disappear during the next economic shake-up.

Disruption hardly ever happens at the level of individual jobs. It also usually isn't predicted at the analyst level, as demonstrated by statements like this:

More reasonable suggestions show that large language models (LLMs) can replace some of the duller work of engineering.

The reality is that there is almost no dull work left in proper software development. Any dull work that remains is due to legacy, stupidity, stubbornness or lack of access to proper tools.

Comment Re: not really (Score 1) 247

Actually, in most countries with actual laws, the exhaust from diesel vehicles is almost as clean as that from liquid gas powered vehicles and much cleaner than that from gasoline powered vehicles; it simply smells like steam.

Here, we're only reminded of how things used to be by import dudes and their lawless diesel vehicles, but it won't be long before their cars are banned; they already are in most city centers.

The problem of unhealthy diesel fumes was mostly solved ages ago.

Comment Re:Will the bill also ban black electrical tape? (Score 2) 132

The first approach you describe would be next to impossible to pull off without either
- compromising the CA certificates on your phone (and thus already having access to your phone, e.g. through malware or through the vendor),
- obtaining control over at least one such CA (which likely won't suffice thanks to RFC 6844),
- obtaining the private keys of the services they'd want to impersonate, or
- compromising the software repository or other vendor services your phone uses.

That would be a gargantuan effort, and doing so at a large scale would be extremely difficult. It's much more likely they will attempt to compromise suspects' phones one at a time through more classic means, such as the one you describe as the second approach.
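For the curious, this is roughly what the RFC 6844 (CAA) restriction looks like from the outside: a domain publishes DNS records naming the only CAs allowed to issue certificates for it, so controlling one random CA isn't enough. A minimal sketch, assuming the third-party dnspython package; the domain is just a placeholder.

```python
# Minimal sketch of looking up CAA records (RFC 6844), assuming the
# third-party dnspython package is installed (pip install dnspython).
import dns.resolver

def print_caa(domain: str) -> None:
    """Print the CAA records for a domain, if any are published."""
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{domain}: no CAA records published")
        return
    for record in answers:
        # Typical output looks like: 0 issue "some-ca.example"
        print(f"{domain}: {record.to_text()}")

if __name__ == "__main__":
    print_caa("example.com")  # placeholder domain, substitute your own
```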

Comment Re:I wouldn't trust ChatGPT (Score 1) 137

I have three pieces of feedback.
1. You seem to be using GPT-3.5, not GPT-4.
2. GPT-3.5 actually does give the right answer these days.
3. GPT-4 does not write a Python program; it just lists the substrings and tells you what the highest number is.

There seems to be a lot of confusion about which GPT version people are using, causing people to totally underestimate the power of GPT-4. And then there's the other half of the people in this comment section who do use GPT-4, but in a rather naïve way, and at the very first incorrect answer they decide that it is stupid, while in fact a less naïve approach would have blown them away.

Just to be clear: if you're not sending dollars in the general direction of OpenAI, you're not using GPT-4.

Comment Re: Not ready for that yet (Score 1) 137

Yes. However, you only need a single prompt to make it behave like a software developer that follows a set of rules; after that, it just asks a product owner what it should create, so there's not much left to develop for that developer once it is in place. You also only need a single prompt to make it behave like that product owner...

Note that I'm not going the luddite rhetoric route here; I really don't have a clue what will happen once (and if!) this is set loose at a large scale. I can imagine we'll all just get a lot faster at our current jobs, but I'm also very seriously considering the possibility of this thing automating my job, and all jobs around it, from start to end within a few years.

Comment My experience (Score 1) 137

Unfortunately, I have no API or plugin access, but I've experimented a lot with ChatGPT-4 (you know, the paid version; do not confuse it with GPT-3.5, which you get in the free version). Contrary to many comments here, I'm pretty much convinced GPT-4 is technically ready to replace just about any software development business with a single person who's really good at writing prompts.

Now, I'm not going to convince you of that, but let me address one thing that I think is important.

Most of the comments here state that they usually just see simple examples and that the code it produces tends to be of junior level. Now, obviously, that is true, but the thing here is that GPT-4 has a linear process of reasoning. It always sends you the very first draft, and this confuses people, because it is of a quality a human would never produce without planning and proofreading, and therefore you'd intuitively expect it to have done exactly that. But it hasn't; for example, if it writes code for you, it has to get any import statements or class member variables just right at the beginning. There's no adding an import later when a method turns out to need it, which is what your average human would do. Considering that, GPT-4 is already well beyond human level.

So to get GPT-4 to give better results, there are broadly 3 techniques I've found to work well, and they all revolve around dealing with that linear reasoning (a rough sketch in code follows the list):
1. Take it one step at a time, just like humans would. First have it get the big picture and then slowly drill down. Start with architecture and go on with high-level design, API design and test strategy before you tell it to start coding.
2. Prime it well. Tell it what it is (e.g. a senior software architect), tell it to ask for clarification when things are unclear, and tell it things you might accidentally take for granted, like coding style, approach to logging, naming conventions and doing test-driven development. Tell it to always inform you of potential improvements to previous work.
3. Continuously encourage it to improve on what it has done. Ask it to refactor the architecture a bit before starting on the design. And after it is done with the design, ask it whether it sees possible improvements to the architecture.
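
To make that a bit more concrete, here's a minimal sketch of what such a workflow could look like in code. It assumes you do have API access via the openai Python package (v1 client) and a model named "gpt-4" (I don't have API access, so treat it as untested), and the persona, rules and step list are purely illustrative.

```python
# Sketch of the prime-then-drill-down workflow, assuming the openai Python
# package (v1 client) and API access to a model named "gpt-4".
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Technique 2: prime it with a persona and the ground rules you'd otherwise
# take for granted.
history = [{
    "role": "system",
    "content": (
        "You are a senior software architect. Ask for clarification when "
        "requirements are unclear, work test-driven, stick to a consistent "
        "coding style, and always point out potential improvements to earlier work."
    ),
}]

# Technique 1: one step at a time, from big picture down to code.
# Technique 3: the last prompt pushes it to revisit its own earlier output.
steps = [
    "Propose a high-level architecture for a browser-based Tetris clone.",
    "Refine that into a module-level design with clear responsibilities.",
    "Design the public API of the game-loop module.",
    "Outline a test strategy before any code is written.",
    "Now implement the game-loop module, test-first.",
    "Looking back at everything so far, what would you refactor or improve?",
]

for step in steps:
    history.append({"role": "user", "content": step})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"--- {step}\n{answer}\n")
```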

Now, obviously, that's a lot of work, but GPT-4 can do it all for you. Unfortunately, without API access or IDE integration, it's mostly a matter of copying and pasting between conversations, but by simply doing that you can easily get one instance of GPT-4 to play the manager and the reviewer and have the other one do the work. The results will blow you away. The speed won't, though (and that's probably why it's going to be a while before it will take over your average business).

Comment Re:Not ready for that yet (Score 1) 137

ChatGPT-4 can easily clean up the messy code it creates when fixing things. You just have to ask it. And that's key to having it make good software: have it go through the same process humans do. Start with principles, architecture, design, modularity and a plan for automated testing, and then have it work test-driven on small components. Add to that humongous amounts of encouraging it to improve previous things based on new insights, refactor code, and clarify and explain what it has done, and it'll make quite good stuff. Note that GPT can easily do all of those things for you too. Especially asking it to improve is essential; due to the linear nature of GPT, it cannot go back and change things it has already output within a single response, but when asked afterwards, with the full picture in view, it can do so with ease.

Ideally, you'd set it up in such a way that these different tasks are performed by different instances that each act like a "senior professional in that field" and discuss the steps above amongst each other; that would give you the continuous feedback-based improvement without user interaction. Without GPT-4 API access, a much more lenient question cap and IDE integration, that's a bit difficult to achieve for now, but simply manually walking through the different phases and steps in such a process will give you a pretty good idea of what it is capable of, which in my opinion is way beyond "an inexperienced programmer who just graduated with average grades".
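
In code, the multi-instance setup could look something like the sketch below. Again this assumes the openai Python package and API access to a "gpt-4" model, which I don't have, so it's untested; the role prompts, the task and the fixed number of review rounds are all made up.

```python
# Sketch of two GPT-4 "instances" in different roles reviewing each other's
# work, assuming the openai Python package (v1 client).
from openai import OpenAI

client = OpenAI()

def instance(system_prompt: str):
    """Return a stateful chat function representing one GPT-4 'instance'."""
    history = [{"role": "system", "content": system_prompt}]

    def ask(message: str) -> str:
        history.append({"role": "user", "content": message})
        reply = client.chat.completions.create(model="gpt-4", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    return ask

developer = instance("You are a senior developer. Write clean, test-driven code.")
reviewer = instance("You are a strict code reviewer. Give concrete, actionable feedback.")

# The developer produces work, the reviewer critiques it, and the critique is
# fed straight back: the feedback loop otherwise done by hand with copy-paste.
work = developer("Implement collision detection for a browser-based Tetris.")
for _ in range(3):  # a few improvement rounds; tune to taste
    critique = reviewer(f"Review this code:\n{work}")
    work = developer(f"A reviewer gave this feedback; please revise:\n{critique}")

print(work)
```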

My first question for GPT-4 was asking it to write a browser-based Tetris. That actually worked somewhat out of the box, but it crashed as soon as the first block reached the bottom. It took me ages of explaining how it crashed, and it kept fixing _other_ bugs or changing things that were not a bug. Eventually, I asked it to explain its reasoning in comments, include logging, and I continuously asked it to refactor as it saw fit. That's when things started going significantly faster; especially the logging helped a _lot_ in cases where I myself would have used a debugger. It still leaves a lot to be desired, but the resulting Tetris is here: https://zmooc.net/gpt-4-tetris...

Obviously, Tetris does not prove what I said before but unfortunately I cannot as easily disclose my further experiments with more complex stuff. I'm fully convinced a properly prompted set of GPT-4 instances with good browsing capabilities (I'm thinking about both automated testing AND looking stuff up) and IDE integration can take over just about all jobs in your typical software development company. Whether it actually will anytime soon depends on many factors, though, of which fear of the unknown and the utter lack of compute resources are probably some of the biggest; this is likely a great time to buy stocks in AI hardware manufacturers like Nvidia.

Comment Re:Simulation optimization (Score 1) 200

Being in a simulation solves nothing.

I disagree; taking into account that there's a very small chance that we live in a simulation can help solve questions, because now we can think about them from the perspective of a computer programmer. What sets a simulation apart from a real universe is that it might contain artefacts introduced by optimizations. Retrocausality might be such a thing: a thing that's unlikely to exist in a real universe but not unlikely to exist in a simulation.

Also, your statement relies on the assumption that the universe the simulation runs in would have to have the same rules as the simulation does and thus would have the same questions. That's just an assumption. In our reality just about any simulation relies on a simplification of the reality it simulates.

Comment Simulation optimization (Score 1) 200

I did not RTFA and am not a physicist but just a random person with a vivid imagination but that has not kept anyone on here from joining in the discussion, so here it goes :p

It could just be some optimization in the simulation. It's kind of computationally expensive to update the position of every photon with each tick of the clock, especially since the vast majority of photons will travel in a straight line through empty space for billions of years. I can think of several optimizations, but one of them could be to update the position of a photon less often if it is not near anything (assuming that's readily available information). This would cause some lack of precision that might manifest itself as retrocausality. It might also cause individual photons to sometimes disappear mid-air or make very high-speed objects transparent, both of which could probably be tested in a carefully crafted experiment.
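
To illustrate the kind of optimization I mean (and nothing more than that), here's a toy sketch; the units, thresholds and data layout are all invented.

```python
# Toy sketch: photons far away from any matter get their positions updated
# less often, trading precision for speed. All numbers here are invented.
from dataclasses import dataclass

@dataclass
class Photon:
    position: float      # 1-D position, arbitrary units
    direction: float     # +1 or -1
    last_update: int     # tick at which the position was last recomputed

def update_interval(distance_to_nearest_object: float) -> int:
    """Update every tick when near matter, rarely when in empty space."""
    if distance_to_nearest_object < 1.0:
        return 1          # full precision where interactions can happen
    if distance_to_nearest_object < 1000.0:
        return 10
    return 1000           # deep space: barely ever touch it

def step(photons: list[Photon], tick: int, nearest: dict[int, float]) -> None:
    """Advance the simulation one tick; nearest maps photon index to distance."""
    for i, p in enumerate(photons):
        if tick - p.last_update >= update_interval(nearest[i]):
            # Catch up over all skipped ticks in one go; this coarse catch-up
            # is exactly where retrocausality-like artefacts could sneak in.
            p.position += p.direction * (tick - p.last_update)
            p.last_update = tick
```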

Possibly answers to such fundamental questions will eventually come from our own simulation efforts.

Comment Let me translate this for you (Score 1) 44

Slashdot readers probably already understand this, but your average news consumer will read this as something significant. In reality, what they're saying is: "in our graph, the line indicating the growth of the problem has reached some totally arbitrary angle".

It's just the derivative that's plateauing. We've been trying to curb CO2 emissions for decades. The moment we STARTED doing that, decades ago, CO2 emissions should have plateaued. But they didn't, because humanity as a whole has basically done nothing so far. That's what this news tells us: our incompetent species might just be starting to do something, but the signs are not very clear.
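
A back-of-the-envelope illustration of the difference, with completely made-up numbers:

```python
# Made-up yearly emission figures (Gt CO2) to show that a plateau in the
# growth curve is not the same as the problem going away: yearly emissions
# stop growing, but the cumulative total keeps climbing linearly.
yearly_emissions = [36.0, 36.5, 37.0, 37.2, 37.3, 37.3, 37.3]

cumulative = 0.0
for year, emitted in enumerate(yearly_emissions, start=2019):
    cumulative += emitted
    print(f"{year}: {emitted:.1f} Gt this year, {cumulative:.1f} Gt in total since 2019")
# Only when the yearly figure heads towards zero does the total actually level off.
```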
