Comment Reads like government astroturfed journalism (Score 1) 55

"I arrived on April 28, 2025 with a dream and not much more, maybe a couple bags of clothes," Lieber said of his move to China at a Shenzhen government conference in December. "Personally, my own goals are to make Shenzhen a world leader."

SMART last year appointed Lieber as an investigator, according to a post on i-BRAIN's website dated May 1, 2025. That news was covered by some media outlets. The same day, i-BRAIN said Lieber had also been appointed its founding director -- an announcement that went unreported at the time. This story is the most comprehensive account of Lieber's activities since he moved to China. Reuters is reporting for the first time that his lab has access to dedicated primate research facilities and chip-making equipment; that it sits within a sprawling ecosystem of state-backed institutions bankrolled by billions of dollars in government funding; and that it is housed within an institution that is luring top scientific talent back from the United States.

SO:
He moved to China at least a full year ago.
Five months ago he spoke in front of a crowd at a conference.
The news of his appointment to the company was covered in the media.
This current story is not the first account of his move, it's just "the most comprehensive... since he moved".
This is just the first time Reuters has bothered to report... [insert list of scare phrases dressing up straightforward facts that are exactly what you'd expect from every engineering/tech research operation on the planet with regard to the scope of its research].

Then, ask yourself why this piece of "news" exists, when its informational content isn't even really news. Most of it is old and already publicly available.

This reads like a piece of breathlessly-fearmongered agitprop someone in the MIC fed to a Reuters asset so it will be shared across the Internet and remind Americans once again that we are surrounded by terrifying external enemies and need to keep the $1.2billion/each rockets and drones rolling off the assembly line.

Comment Re:Gatekeeper: "Oh you mean THIS expedited process (Score 1) 56

Snotty reply aside:

So, if a Process can be done without altering Quality_X, then it necessarily follows that the Process can be done without lowering Quality_X.

Why don't you want a faster process that maintains the same protective threshold?

Because we are not sure that Speed is not somehow related to Quality_X. In a negative manner.

Why aren't you sure, when it's in the article? The article's quote specifically says that Speed "does not alter scientific or regulatory standards".
That's the entire point.

This treatment's review process will go through the exact same unaltered scientific and regulatory standards as every treatment's review process.
This treatment's review process will not be less scientifically or regulatorily rigorous than every other treatment's review process.

This treatment has received an expedited review.
Speed is the only quality of its review which will be different.
Its scientific and regulatory qualities will not be altered.

It's puzzling that you keep trying to attack me as if I'm saying I think there should be no standards for this treatment, when I haven't said anything whatsoever about whether this treatment should or shouldn't receive a faster review, a stronger review, a weaker review, or any other aspect of its review.

All I have done is repeat what the article itself says -- this review will follow the exact same unaltered scientific and regulatory standards as every other review for every other treatment. It will merely happen faster.

Comment Re:Gatekeeper: "Oh you mean THIS expedited process (Score 1) 56

With them classed the way they have been, it is very difficult to legally do much research at all, even though approving for safety and efficacy would be the same as for any other drug. Legally buying/transferring enough to legally give to volunteers (who will need paperwork should they get drunk, get screened by the police, and set off a christmas tree of testing results) all creates a lot of overhead.

If that overhead can be reduced/expedited without altering the safety or regulatory standards, then why does that overhead exist in the first place?
Who are the people who want lengthy overhead delays which are not part of maintaining safety/regulatory standards?

Comment Re:Gatekeeper: "Oh you mean THIS expedited process (Score 1) 56

without lowering the scientific or regulatory standards

Wrong. TFS states: "it does not alter scientific or regulatory standards."

I realize that all you stoners are waiting for the law to allow Dr. Feelgood to dash off prescriptions for all you "study participants". But let's just hold our horses until some real science is done on this stuff.

It appears you did not understand. This is very straightforward Logic 101 propositional reasoning.
"Lowering" something means to "alter" that thing in a downward way.
Therefore, if you lower something, then you have altered it.
Therefore, if something has been not-altered, then you definitively know that it has been not-lowered.
So, if a Process can be done without altering Quality_X, then it necessarily follows that the Process can be done without lowering Quality_X.
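The contrapositive step above can be checked mechanically. Here is a minimal Python sketch (function names are mine, purely illustrative) that enumerates every truth assignment consistent with the premise "lowering is a kind of altering" and confirms that not-altered forces not-lowered:

```python
# Premise: any lowering counts as an alteration, i.e. lowered -> altered.
def lowered_implies_altered(lowered: bool, altered: bool) -> bool:
    # Material implication: "lowered -> altered" is "(not lowered) or altered".
    return (not lowered) or altered

def check_contrapositive() -> bool:
    # For every state consistent with the premise, verify that
    # "not altered" forces "not lowered" (the contrapositive).
    for lowered in (False, True):
        for altered in (False, True):
            if lowered_implies_altered(lowered, altered):
                if not altered and lowered:
                    return False  # would be a counterexample
    return True

print(check_contrapositive())  # → True
```

There is no consistent assignment where the standards are unaltered yet lowered, which is the whole point being argued.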

EXAMPLE:

You are a frequent business traveler.
It normally takes your Uber 30 minutes to drive you to the airport in rush hour traffic.
One morning you are getting into the Uber and the following conversation happens.

DRIVER: "Going to the airport? Did you hear on the news the government shutdowns are causing long delays at TSA today?"
YOU: "Oh crap! Usually it only takes me about 20 minutes to clear security. I'm running a little late today and my flight boards in 60 minutes! I have an important lunch meeting on-site that I cannot miss."
DRIVER: "Well, I can get you there in 10 minutes instead of 30."
YOU: "Hehe, uhh, that sounds great, but I don't want you to break the law and end up getting pulled over or get us in a wreck."
DRIVER: "Not to worry. I can get you there in 10 minutes without altering our safety or legality."
YOU: "Then make it so, Number One!"

Therefore, the driver can get you to the same end result, in one-third of the time, and it will not lower your level of safety or compliance with traffic standards.

So, if the timeline of this review process can be significantly lowered without needing to alter the scientific and regulatory standards it must meet to gain approval, then why was that timeline so lengthy?

Why don't you want a faster process that maintains the same protective threshold?
Why do you want the process to be slow, when slowness is not necessary to maintain safety/regulatory standards?

Comment Re:Tablets in restaurants safe or not? (Score 4, Insightful) 62

If you're exposed to restaurants to an extent that this would develop behavioural difficulties, I think you have bigger issues that need to be addressed. Kids will be exposed to screens; it's a question of dose.

The point is that adults who give their kids a stim drug to keep them vacuously quiet in a restaurant (because that's what a tablet does - it produces, via visual/auditory/haptic stimulation, an internal neurochemical change and consequent behavioral pattern as if you had dosed your toddler with a combination of alprazolam and an amphetamine), are the same adults who are stim-drugging their kid with that screen in the car on the way to/from the restaurant, on the pew at church, in the shopping cart at the grocery store, on the train to grandma's house for Christmas, in the waiting room at the doctor, in the evening when the adults get home from work and need some time to take care of their own daily needs, etc.

Each of those situations is understandable. Just like every game theory or economic scenario consists of large groups of perfectly rational choices, which collectively result in pervasive systemic negative consequences that are far worse than the sum of the individual choices. I think saying "kids will be exposed to screens" is akin to saying "kids will be exposed to nicotine". Hm, actually, yeah, that tracks, because if I gave a toddler a cigarette it would also help neurochemically pacify/sedate them while I celebrate my birthday at Cheesecake Factory.

Why would you choose to bring a toddler to an entirely optional environment if your toddler is incapable of being in that environment without you drugging them? Is it so you can enjoy the experience of being at a sit-down restaurant? You getting to have tableside guac and a skinny marg with the other wives is worth drugging your kid and distorting their neurological development?

It's a missed learning opportunity. Childhood is a process. The entire point of the process of childhood is to develop the self-regulation that will allow them to navigate the world. Self-regulation is a tremendously complex art, composed of thousands of soft skills that allow you to maintain yourself while:
being in unfamiliar physical spaces
being the center of attention
not being the center of attention
interacting with your family
interacting with strangers
listening to others while they talk
processing external stimuli and filtering for relevance (your table vs other tables)
adding something to the conversation
assessing your level of hunger and satiety
using cups, plates, spoons, napkins
the list goes on for pages and pages.

Yes, you could be reductive and say missing any one instance is not a big deal. It won't hurt them to give them a stim-drug so dad can watch The Big Game with his buddies at Buffalo Wild Wings. These skills do not programmatically pop into your head at age 14. Humans are not spiders. We do not live to instinctively build webs based on inherited firmware. Humans are cultural animals. Childhood is when these skills are acquired, via acculturation. And they are acquired through a million instances of practice. Children are not spiders running a program of "Climb up, find space, squirt web across space". Children must actively, repeatedly, and progressively encounter and confront their external environment and their internal sensory state. If you do not give your toddler the gift of a million instances to practice and develop self-regulation and navigation, you are choosing to reduce their future functional capacity. You are choosing to acculturate them to a dependence on external stim drugging. You are choosing to numb them to their own bodily sensations of hunger, thirst, boredom, comfort, pain, safety, danger, etc. You are choosing to retard their development of internal resilience.

Why come to a restaurant and bring them out into the outside world if you are going to then administer a nerve block that prevents them from actually encountering and processing that outside world?

But this conversation is so hard to have, because raising and teaching kids is hard. It is where you will need to call on every ounce of resilience and self-regulation and navigation skill you yourself have managed to develop. It's the hardest job you'll ever have. No breaks, no comp time, poor training/documentation, abysmal administrative support, a seemingly endless march of costs on a terrifyingly uncertain budget, chaotic policy/procedure changes (the way you did it for the past 2 years suddenly no longer works today and never will work again). And your PHB is literally a child who has absolutely no understanding of the kind of pressure you're under and what you have to do to keep this operation glued together, but their ignorance doesn't stop them from making constant demands on you anyway.

Nobody wants to be the a-hole who walks up to a table and tells another parent how to parent their kid. And there's nothing - not even political/religious debate - that makes you defensively angry faster than someone telling you you're parenting wrong.

Comment Gatekeeper: "Oh you mean THIS expedited process?" (Score 1) 56

"The voucher expedites the timeline only; it does not alter scientific or regulatory standards,"

If you can expedite the process without lowering the scientific or regulatory standards, that should immediately raise the question of why your process was so slow in the first place.

The quote carries the exact same meaning as: "Our normal timeline contains bureaucratic delays that do not improve scientific or safety outcomes".

Submission + - Ransomware is getting uglier as cybercriminals fake leaks and skip encryption en (nerds.xyz)

BrianFagioli writes: Ransomware activity jumped again in Q1 2026, with 2,638 victim posts on leak sites, up 22 percent year over year, according to ReliaQuest. But the bigger shift is how messy the ecosystem has become. Established groups like Akira and Qilin are still active, while newer players like The Gentlemen surged into the top tier with a 588 percent spike in activity. At the same time, questionable leak sites such as 0APT and ALP-001 are muddying the waters by posting possibly fake breach claims, forcing companies to investigate incidents that may not even be real.

Meanwhile, actors like ShinyHunters are showing that ransomware does not always need encryption anymore. By targeting identity systems and SaaS platforms, attackers can steal data using legitimate access, often through phishing or even phone-based social engineering, and then extort victims without deploying traditional malware. With a record 91 active leak sites and faster attack timelines, the report suggests defenders should focus less on tracking specific groups and more on stopping common tactics like credential theft, remote access abuse, and large-scale data exfiltration.

Comment Re:Never got the hate (Score 1) 79

show me all the businesses matching the specific keywords I entered

Most users are not good at keyword prompting and that has been the driver of Google's products "knowing better" than the users for a long time.

You're correct about that. I spent the first 10 years of my career in research. It's atrocious the way attempted mind-reading - albeit derived from statistically valid analysis - has completely destroyed (via deliberate enshittification) search/discovery functions.

See, the problem with mind-reading in current-gen Search is the same problem as the mind-reading in current-gen "AI". It's just statistics all the way down. Current-gen Search has been steadily "trained" on the past 3 decades of user data. And that data includes trillions of search attempts from average people with poor spelling, limited vocabulary, unclear phrasing, zero knowledge of Boolean operators, etc. In one single population generation, we dramatically reversed ("in Soviet Russia" meme style) the paradigm from "Computers are dumb tools that smart humans can figure out how to use" to "Humans are dumb tools that we need clever computers to figure out how to use".

So yes, Search and AI have become exceedingly excellent at using their training data to find (and, increasingly, create) the centered slices of the Bell Curve where their owners can maximize their monetization. However, the very same process which makes it so good at surfacing good results to a query like, "where can i renu my vehical restration?" fundamentally requires that it start from a presumption that the user either doesn't know what we want, or that we are wrong about what we want, or that we don't know how to correctly ask for it. The shift in focus helps the owner class more efficiently capture and monetize the Bell Curve. But the nuance has real consequences -- product development is no longer based on asking the question "What do users want to do" but instead is "What do we most want users to be able to do". This shift now shows up more and more in overall UI/UX trends where software updates (which are now a constant, largely invisible treadmill thanks to everything going cloud/browser based) completely rearrange nav, prominently center features people don't want, and in the process break or entirely secret-police Disappear many features people needed.

I am excellent at constructing effective, specific query strings. More and more search engines blithely ignore that -- it's not "here are the best results based on what you entered" but "we took what you entered and translated it into something matching a statistical analysis of what most other people look for when they put those words in a box, so here are the results most other people (and our business partners) want". I've become accustomed to the way all modern platforms show me what they want rather than what I want.

Sure, I am capable of rolling my ongoing understanding of how the system works, in order to update my search behaviors to find the increasingly small needle in the increasingly large, repetitive algorithmically-generated haystack. But there's the rub:
The instant you, as a human, have to start behaving differently in order to get around what you see the Prediction Engine offering you, then... by definition the Prediction Engine has failed at its one job. The system is now positioning YOU to take on the labor... of being a Prediction Engine... to predict their algo... on your own behalf... while they continue to collect revenue as if their prediction had succeeded.

within the specific area I searched.

Area based search is only context relevant after moving the map. This is also a pattern recognised by Google: People will search in general expecting results outside their area but often within a reasonable driving distance. If you actually move the map you get a context specific "search this area" button which doesn't zoom out. This isn't Google being unusable, this is Google being usable and user friendly for the majority of people while it may conflict with your particular use case.

Close, but still partially incorrect. In Google Maps, area based search is only context relevant after moving the map and performing your first search.

This is a direct example of the standard behavior of Google Maps:
1) I open the app (or go to the website).
2) It presents me with its best guess at my current location.
3) I slide the map and then zoom in to frame a particular area of about 10 square miles.
4) I enter my search.
5) The map automatically zooms out to show me results in the entire 500 square mile metropolitan area.
6) THEN, only after I have already allowed them to override my actions and show me their preferred, payola-weighted best guess at reading my mind, I am allowed to re-zoom the map back to my first 10sq. mile area and click "Search this area".
7) I then decide that yes, I do want to go to one of the locations I found, and while I'm there I want to also pick up something else if there's one of those businesses nearby. So I go up to the search box and enter that query.
8) The map zooms back out to show me multiple businesses 20-30 miles away.
9) I have to zoom back in (this is now the third time) to my original search area, in order to once again be given permission to "Search This Area".
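For contrast with the nine steps above, here is a minimal sketch of the older behavior being described: a search that simply filters a business index against the viewport the user framed, with no auto zoom-out. All class names, coordinates, and businesses are invented for illustration; this is not Google's actual API:

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    """The geographic frame the user has deliberately set."""
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

# (lat, lon, name) -- a toy business index spanning a whole metro area.
BUSINESSES = [
    (40.01, -75.01, "Halal Market A"),
    (40.02, -75.02, "Halal Market B"),
    (40.45, -75.50, "Halal Market C (30 miles away)"),
]

def search_viewport_only(viewport: Viewport, keyword: str) -> list[str]:
    """Old behavior: return every match inside the frame the user set,
    without ever zooming out to 'suggest' distant results."""
    return [name for lat, lon, name in BUSINESSES
            if keyword in name and viewport.contains(lat, lon)]

framed = Viewport(40.00, 40.05, -75.05, -75.00)  # the user's ~10 sq. mi. frame
print(search_viewport_only(framed, "Halal"))
# → ['Halal Market A', 'Halal Market B'] -- no auto zoom-out required
```

Re-running the same filter whenever the viewport changes is all the dynamic repopulation described below amounts to; the "Search this area" button exists precisely because the platform declines to do this by default.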

Thing is, that approach wouldn't be nearly as dumb if online maps worked the way they did 20 years ago. It used to be that once you entered your search query, the matching locations would show up, and then as you slid the map up, down, or sideways, other matching locations would dynamically populate as they entered the display. You did not have to repeatedly re-prompt it that yes, you do in fact want it to show you matches of your words within the geographic borders you set. The understanding was that you were performing a search, and the platform's job was merely to slavishly follow along with whatever actions you took, and keep serving you more results until you stopped. Suggested matches are not search results.

But the reason they stopped doing that is because they also stopped showing you all the matching results in your search area. They show you a combination of those Bell Curve guesses, what's trending, what others in your demographic clicked on, and, of course, what they've been paid to feature/promote. Or rather, to push those who haven't paid to page 3. Similar result. If you showed users all the results up front and let users dynamically pan and zoom around, my gosh, they would be able to more easily ignore and scroll past things they don't care about and jump around to find only the ones they do care about. And if you did that you couldn't grow to be a trillion dollar empire. So instead, every search begins with Suggestions. Force users to trigger their cognitive filtering/sorting functions to recognize the duds from the desired. Then, if the user changes anything, present them with another set of curated suggestion-guesses, so we have to re-engage the cognitive filtering/sorting. It's the same thing as modern news websites constantly re-positioning and re-designing advertisements, and then after reading 3 paragraphs they make you click "Next Page" to read the next three paragraphs. It forces your brain to at some level perceive and process the advertising and sponsored content before your brain can reset its rule for ignoring what you don't want.

even though I know for a fact there are numerous other locations matching my search within that same half mile.

How do you know that for a fact? Do you have access to Google's database?

There are dozens of different reasons people know things with certainty yet still need a Search function. I'll just pick one as an example: I know that for a fact because my sister, or allergist, or business partner is in that area, so I occasionally drive through it and I remember seeing there was a little cluster of Halal markets or thrift stores, and I want to check names, reviews, hours, etc. because I'll be seeing my sister/doctor/partner this week and on my way back home I could pick up some ground lamb or find a good cheap gag gift for an upcoming party. I know the businesses are in this specific area, but the platform is so intent on usurping my requests with what it suggests, that I now have to do extra actions to force it to cough up what it knows.

Consequently, our overall trust in these platforms suffers, and you no longer fully believe that it IS in fact showing you all it knows, rather than curating what's best for the platform to show you. It's why 15 years ago I would gladly spend 30 minutes searching Amazon for a particular product, and feel confident that if I were diligent and picked my keywords well, I could narrow down to a specific item that would in fact be the best thing for me. Nowadays, if I do bother to go to Amazon, I'll spend maybe 5-10 minutes at most, because by that point I've already seen whatever its en$hittification algo is willing to show me that DOES happen to match my needs, plus I'm done having to constantly re-ignore those same Featured results which keep haunting my first couple pages of results no matter what I type.

tl;dr The very act of Searching used to be fun, interesting even. You could add/remove keywords to broaden or refine, and move laterally across the tremendous breadth of human culture, constantly discovering NEW things you never knew you never knew. As the entire Internet has deteriorated into little more than a vehicle for monetary arbitrage for globalists to exploit wealth-discrepancy gradients, I find that Search, like human life, is nasty, brutish, and short, because it takes less and less time to see that everything's become a Funnel, everything is Choice Architecture. Every result is the samey-same pattern of subtle or not-so-subtle promotion and demotion. If you have any pattern-recognizing ability at all, you can clearly recognize what the system has pre-determined to give you, so you just Walkaway.

still tl;dr What's dumber, the AI that keeps giving you wrong information every time you ask it to count the number of Rs in strawberry? Or the conscious human being who keeps asking the AI that has clearly demonstrated its pervasive, critical flaws?

Comment Re:Never got the hate (Score 3, Interesting) 79

It still sucks big time.
It is faster than Google Maps but that is due to the complete lack of information and features.

You've convinced me to try it again.

G Maps, like G Search, has been calculating the fastest route to the destination called Unusability due to the complete bloat of "information and features" that help them monetize use. Meanwhile it is failing at core functions of a map.

For example, their UI/display always seems to have plenty of room to cram in a bunch of payola business listings in an area - sometimes drastically zooming out of my original search area to show them to me - but never seems to have room to just, you know, show me all the businesses matching the specific keywords I entered, within the specific area I searched.

It has so much room for featured/sponsored listings, even though I know for a fact there are numerous other locations matching my search within that same half mile.

It has so much room for featured/sponsored listings whose popup pins/labels take up screen space, but it still doesn't have room on the map for, you know, the names of the actual streets, which randomly disappear as you browse, so that you have to zoom way in or way out trying to get a picture of an area that actually maps that area.

Their AI is being trained to watch my face as I sleep and tell me what I dreamed last night as well as the meaning of the dream, but they can't figure out how to dynamically adjust the typeface of the 12 characters in "MLK Jr. Blvd" so they stay visible as I zoom in and out on a city neighborhood?
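Keeping labels legible across zoom is, for what it's worth, a long-solved rendering problem: size the label in screen pixels and clamp it, rather than tying it to map units so it vanishes or balloons. A hypothetical sketch (all constants and the function name are invented for illustration):

```python
def label_font_px(zoom: float, base_px: float = 12.0,
                  min_px: float = 9.0, max_px: float = 16.0) -> float:
    """Screen-space label size: a small linear nudge with zoom level so
    labels feel anchored to the map, clamped so they never shrink below
    legibility or grow unreasonably large at high zoom."""
    size = base_px + 0.5 * (zoom - 15.0)  # 15.0: an assumed 'neutral' zoom
    return max(min_px, min(max_px, size))

for z in (3, 11, 15, 19):
    print(z, label_font_px(z))
# 3 → 9.0 (clamped), 11 → 10.0, 15 → 12.0, 19 → 14.0
```

With something like this, "MLK Jr. Blvd" stays readable at every zoom level; whether a label is shown at all becomes a separate prioritization decision, which is where the sponsored pins win out.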

Comment Re:Targeted individuals... (Score 2) 91

"It's not the kind of nuclear program that potentially a foreign adversary could significantly impact by targeting 10 individuals."

Assuming you were trying to *kill* those 10 individuals to disrupt ongoing research, no it wouldn't make any significant difference.
But who's to say the missing individuals weren't kidnapped and taken somewhere? china? russia? iran?
Who's to say a foreign agent wasn't trying to recruit or kidnap individuals, and the dead ones represent failed attempts where they had to kill them to cover up their failed attempts?

One dead scientist doesn't make a huge amount of difference to the overall program, but one captured/defected scientist could spill a lot of secrets and significantly advance an enemy program.

"Who's to say it wasn't XYZ?" is classic random-conjecture/fantasy language.

The phrase is inherently vacuous. It doesn't even contain any meaningfully useful logic, because XYZ could be literally an infinite number of things without adding or subtracting any from the potential resolution of the question.
Who's to say they weren't harvested by Shiva?
Who's to say they weren't Raptured by Jesus?
Who's to say it wasn't Shiva pretending to be Jesus?
Who's to say it wasn't Loki pretending to be Shiva pretending to be Jesus?
Who's to say it wasn't Loki pretending to be Shiva pretending to be double-reverse Loki while pretending to be Jesus operating through Mossad?
etc. ad infinitum

Comment Re:Chatbot Lies (Score 2) 103

Hundreds of thousands of Juries - the Constitutionally-appointed deciders of culpability - have agreed that business owners, tool makers, property owners, and individuals behaving certain ways in public places, are in fact criminally and/or civilly responsible for the damages suffered by victims of their negligent choices. The quote from the AG will be very persuasive to many criminal and civil jurors: "My prosecutors have looked at this and they've told me, if it was a person on the other end of that screen [doing exactly what the chatbot did], we would be charging them with murder."

If you dig a pool in your front yard, put up no fence, invite anyone who wants to come play in your yard, and your neighbor's toddler falls into your pool and drowns, you'll find your culpability in front of a jury won't be summarily dismissed by saying, "A pool is just a place to have fun, it's an inanimate object, not a malevolent intentional conscious murderer. It is not my fault that someone abused the pool's intended function and it accidentally became an instrument of death".

Comment Re:Chatbot Lies (Score 5, Insightful) 103

Exactly, next people are going to be doing legal discovery on levi's jeans because the jeans helped the shooter keep his balls from flapping during the shooting. Stop trying to blame tools and keep the blame squarely on the human that does the evil thing.

Osama bin Laden was not on any of the planes that flew into buildings. All he did was sit there and help plan and train the people who did it.

Or, you go to a construction demolitions expert and ask him what's the best way to place explosives around the football stadium to make sure the exits collapse first so no one can escape. He looks at floor plans and pics, tells you what supplies you need, where to plant the charges, and how to rig the IEDs to blow simultaneously.
But all he gave you was information, so he has no legal or moral culpability for the death and destruction you cause?

Comment Re:Three reasons (Score 2) 44

Right, but presumably most of Meta's employees have skill sets that would let them acquire all three at another (less awful) company.

Not for anywhere near the same Comp&Ben scale.
Look around. See the increasing pace of convergence and conglomeration? Less awful companies can't compete. A mission statement of "Don't be evil" only carries you so far.

The real kicker of the absurd "Coal miners can learn to code!" mantra is that we now see it fails both ways -- it's just as absurd to expect a Senior $IT_Job_Title to get laid off and learn to be a plumber crawling through people's attics to connect their new water heater system at age 54.

Comment Re:Equilibrium (Score 3, Insightful) 59

You can say it like that, and make it sound super evil ... but none of us are crying over the 99.9% of horse shit sweepers who lost their jobs when automobiles were invented.

That's both an incorrect statement and an invalid comparison.

First, people did not lose their horse-related jobs "when automobiles were invented". The transition from horse/oxen power to machine power took 150 years. Jobs and economies gradually adapted over generations as steam locomotion spread, and then again over decades as automobiles spread. Trains and cars didn't take over the world 3-5 years after their major breakthroughs. There were millions of square miles of Earth's land surface where no railroad tracks or roads went, well into the mid-20th century. Even in the wealthy USA, it wasn't until after World War II that one car per household became the default. In the 1940s in the USA South there were still plenty of rural sharecroppers without a personal automobile. Same goes for working class folks in dense metropolitan areas with streetcar systems and city planning that did not yet prioritize cars and parking over all other needs.

Second, the automobile did not eliminate the need for powered motion. Its payload wasn't to eliminate movement. Its payload was to upgrade the method humans used for moving things to a more powerful method of moving things. The same humans were still moving the same things for generations/decades, because those things still needed to be moved. The AI/algo/agents being proffered this year are not saying "We'll take the same humans and give them a more powerful method of producing the same things they have been producing". They're not replacing human-driven horses that cart veggies to market, with human-driven trucks that cart veggies to market. They are making it so the veggies hop off the vines and drive themselves to the market.

I can understand the temptation to say, "Look at history-- there have been several big scares about The End Of Labor, but we always discovered some new market for goods and services several billion people could labor to produce". But this seems akin to saying that since the temperature change from 10C to 25C isn't that bad and was composed of 3 changes of 5C each, that therefore we can confidently assume this pattern will remain true and the change from 25C to 40C will be just as comfortable.

That assumption fails for the same reason the "buggy whip manufacturer" comparison fails -- it presumes that the nature of human beings plays no role in determining whether a result succeeds or fails. It overgeneralizes the wrong rule: "Human metabolism can easily adapt to changes of 15C" instead of "Human metabolism can fairly easily adapt to most temperatures between 10C and 25C without major intervention". In the same manner, your argument overgeneralizes "The average human cognition which could easily adapt from manual labor in the fields to manual labor in the factories, can just as easily adapt to manipulating nuanced verbal abstractions of statistical inference within logic frameworks which update every 3-6 months".

Thus, it is not sufficient to merely hand-wave and say "Past changes have been adapted to, so future changes will always be adapted to". You must actually enter the fray and argue why the consequences of this particular change are a member of the set containing the consequences of all those previous changes. Otherwise what you have is not an argument, but a belief. Induction is not a proof of reality. Induction is a description of what we have observed, and that if the previous conditions hold, then we have reason to expect our observed event to repeat. But the "if" is doing a ton of heavy lifting.
