Comment Re:Verify. Not Summarize. (Score 1) 67

(A vector database could be used to store the changes and update the model's "approval" memory after an edit has been verified.)

It could also be used to identify inconsistencies or frequently changed passages and allow some kind of voting, or to offer alternative versions of contested passages within the document, because gaslighting is real.
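Something like this rough Python sketch is what I have in mind, assuming the Chroma vector database with its default embeddings; the collection name, threshold, and sample passages are made up for illustration and untested:

# Sketch: keep verified revisions of article passages in a vector DB,
# then flag new edits that drift far from anything previously approved.
import chromadb

client = chromadb.Client()
col = client.get_or_create_collection("verified_passages")  # hypothetical name

def record_verified(passage_id: str, text: str) -> None:
    # Store (or update) a passage that human editors have already approved.
    col.upsert(ids=[passage_id], documents=[text])

def looks_inconsistent(new_text: str, threshold: float = 0.6) -> bool:
    # Compare a new edit against the nearest approved passage; a large
    # distance suggests the edit contradicts the approved history.
    res = col.query(query_texts=[new_text], n_results=1)
    distances = res["distances"][0]
    return (not distances) or distances[0] > threshold

record_verified("ra2-developer", "Red Alert 2 was developed by Westwood Studios.")
print(looks_inconsistent("Red Alert 2 was developed by EA Games."))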

My point is, they need to innovate right now or risk dying. The old-school editors aren't going to like it, but they don't really have a choice.

Comment Verify. Not Summarize. (Score 1) 67

Why not use the AI to verify the data on the page?

I looked at a page the other day for Red Alert 2: Yuri's Revenge. It was developed by Westwood Studios, which is what the paragraph said, but the infobox table listed EA Games, something completely different. I think we are being gaslit through it.

Example: Reddit post about a game from 1995 here: https://www.reddit.com/r/comma...

People give all different years for when the logo originated. Google AI says it was purchased in 1998, while others say 2003.

If you ask me, it seems more like an Activision game... not normal for EA games at all.
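Coming back to the verification idea: here is a minimal sketch of what "verify, not summarize" could look like, assuming the OpenAI Python client with an API key in the environment; the model name and prompt wording are just placeholders:

# Sketch: ask a model to cross-check an article's infobox against its prose
# and report contradictions instead of summarizing anything.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def find_contradictions(prose: str, infobox: str) -> str:
    prompt = (
        "Compare the article text and the infobox below. "
        "List every factual claim where they disagree, quoting both sides. "
        "Do not summarize; only report contradictions.\n\n"
        f"ARTICLE TEXT:\n{prose}\n\nINFOBOX:\n{infobox}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(find_contradictions(
    "Yuri's Revenge was developed by Westwood Studios.",
    "Developer: EA Games",
))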

Comment It's capable (Score 2) 18

AI is completely capable; it's just that most of the implementations aren't the best. For example, one of the ones I use at work has a 10,000-character context window for files. It doesn't pick up on many of the fields that it should, and it asks for information it already has. Seems very late 2023/early 2024. I have a certification in IT Automation, which extends beyond that. I have been automating since I was in high school, when I automated my entire job and just played around learning. Automation is unfortunately usually seen as the enemy. Smart companies will invest in their employees and have them hand over their knowledge so the AI can up-skill them, lest they destroy their local economy and end up out of business or having to relocate.
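For what it's worth, a small workaround sketch in Python: chunk the file with some overlap so a 10,000-character window doesn't silently drop fields that straddle a boundary. The overlap size and filename here are arbitrary:

# Sketch: split a file into overlapping chunks that each fit the tool's
# 10,000-character context window, then feed the chunks to the model in turn.
def chunk_file(path: str, limit: int = 10_000, overlap: int = 500) -> list[str]:
    with open(path, encoding="utf-8") as f:
        text = f.read()
    step = limit - overlap
    return [text[i:i + limit] for i in range(0, len(text), step)]

for i, chunk in enumerate(chunk_file("example_config.yaml")):  # placeholder file
    print(f"chunk {i}: {len(chunk)} chars")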

Comment Re:Meanwhile censorship of legal visa holders ongo (Score 1) 255

It's worse than that. They're censoring the internet. AI models are the last remaining source of information, and even those can be filtered.

I was told my social media profiles don't even show up for anyone. My website? Same thing, because I use Cloudflare.

Can you access it? https://danielweisinger.me/

Comment The trend (Score 2) 24

Does anyone else see the trend?

- AI accesses your files
- AI accesses your search history
- AI accesses your email
- AI accesses your contacts?
- AI accesses your calendar?

Pretty soon:
Google: Find all _insert reason that violates policy here_ and suspend their accounts.
FBI: Read the federal law and find me all criminal activity, flagging their accounts. Include anything suspicious that you are uncertain about. Sort by their ability to pay for an attorney. (Since CISA/CISPA let them access the IRS database plus all the metadata they have, without a warrant.)

Also FBI: Let's be lazy and automate this, so we don't have to do anything.

*Shadow List is born*

All the criminals in the world suddenly go underground, unless you're above the FBI.

Reminds me of a story... oh wait.

Comment Profiling (Score 1) 30

I asked ChatGPT to do a psychological evaluation and pretend to be the FBI, and it was creepily on point.

So imagine a periodic task existed with prompts like "Sift through this data and assign a score based on the following criteria. Identify the likelihood that this data would result in terrorist activity. Also identify the likelihood the person is Pro-Life or Pro-Abortion, determine their gender identity / sexual orientation, and identify their race."

You think it couldn't? It's time to get out of the cloud. If you don't know what self-hosting is, now is the time to learn. Navigate to Reddit, find r/selfhosted, and ask how to get started.
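If you want a concrete first step, here's a rough sketch assuming Ollama is installed locally and you've already pulled a model (the model name and prompt are just examples); the point is that the prompt never leaves your machine:

# Sketch: talk to a self-hosted model over Ollama's local HTTP API.
import requests

def ask_local(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local("Explain what self-hosting means in two sentences."))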

Comment Why is this a trend? (Score 1) 43

I know companies want more and more data, but I see very few privacy laws going into place to limit what the government can do with that data, either. That's not even counting the companies themselves. If you think PRISM was a dangerous dragnet, what do you think all these internet giants sucking up your data amounts to? I doubt it ever went away; they just hid it behind the companies, akin to the secret red room at Google and the secret room at AT&T. I guarantee Facebook, Amazon, Google, every phone carrier, X, and essentially all other major tech players have a secret room intercepting data. With CISA/CISPA passing, that's unified data sources across every government agency.

If they haven't built it yet, they are building it. You think they wouldn't? I mean, we literally advertise the "All-Seeing Eye" on the back of our dollar bill. Everyone writes it off as a conspiracy, but that's what they set out to build ages ago. How old is the dollar bill? Just imagine.

Comment Hmm (Score 1) 162

I have a serious distrust of Cloudflare and how powerful it is. I use it, but I don't trust the "middle layer" of the internet that sits between so many websites and the end user. I use it primarily to mask my home internet connection, since a home IP address gives away your location fairly accurately, from what I have heard.

Comment How about open source? (Score 1) 56

I would love to see a law requiring the release of firmware that lets you keep running your own server for hardware that is no longer supported. Kinda like how World of Warcraft had classic servers long before the classic expansions were officially re-released; there is a whole community-supported "private server" scene. It would breathe new life into these products and keep them from becoming e-junk that doesn't even get recycled.

Comment Re:Under What Authority? (Score 1) 33

The whole internet is in trouble. The open web is under siege by Hollywood and news publishers over AI. It's not a new battle; it's the same fight that pirates have fought for many years. Greed. You know these very same people will want AI priced out of reach for the average person so that they can hoard the power and wealth for themselves. We're already experiencing a Roman Catholic empire of sorts, and it will only get worse as AI replaces jobs and people become more and more dependent on the government to survive.

One thing is for sure: ChatGPT wouldn't cost $20/mo if it had been trained by paying for the content it was trained on. $200/mo seems to be what it would cost with licensed content. I mean, look at GPT-4.5 vs o3-mini or even 4o. The prices are insane! There is no way it cost that much to train unless they had to pay license fees in the millions for content that was freely available online.

Comment Law Enforcement (Score 1) 62

The article mentions counselors, but how many schools are actually using law enforcement to police this?

Soon after I got out of school, not long after cell phones were allowed, it went from principals handling discipline (plus security guards at night) to having officers on staff at all times. I watched it become a police state of its own, and this is no surprise to me.

I don't think the technology is inherently bad, though; it just should be approached differently. Let the AI engage the child and talk with them in private. Obviously not Google's, because it might tell them to kill themselves, but an LLM is more than capable of de-escalating a situation by engaging multiple students about it in private, while informing them that help is available should they want it, in which case they can have the conversation sent to a counselor.

ChatGPT has handled the majority of the psychology questions I have thrown at it, and I believe it would be great at handling three-way conversations and de-escalating situations like that if trained to do so. Maybe some software utilizing the API. Privacy is important, as is not incarcerating kids for bad decisions and forever hindering their education.

On top of that, when students message each other, a moderation API could be invoked that talks to them about their message before letting it send, offering nicer ways to say the same thing while still letting them express their emotions, whether that be frustration, anger, love, etc.
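Roughly what I mean, as a sketch assuming the OpenAI client and its moderation endpoint; the model names, wording, and helper function are placeholders, not any specific school product:

# Sketch: check a student's message before it sends; if it's flagged,
# have the model suggest a calmer way to say the same thing.
from openai import OpenAI

client = OpenAI()

def review_message(message: str) -> str:
    mod = client.moderations.create(model="omni-moderation-latest", input=message)
    if not mod.results[0].flagged:
        return message  # nothing hostile detected, send as-is
    rewrite = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": "Suggest a calmer way to say this that keeps the same "
                       f"feeling and point, in one sentence: {message}",
        }],
    )
    return rewrite.choices[0].message.content

print(review_message("This group project is making me furious."))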

All it looks like we are doing is preparing them for the surveillance they will be facing in the real world not long after graduation. Google, the CIA, Facebook, the FBI, etc. all have, or will have, tools to psychologically profile you and raise red flags automatically. Thanks to CISA/CISPA, which passed a long time ago and integrated government data sources, it's all available for AI to consume. It's only a matter of time. The internet is no longer safe to use unless you live a certain life online or use end-to-end encrypted communications.
