Comment Transformative Use (Score 1) 140

One argument against this assertion is that using images from the Internet to train AI image generation systems constitutes fair use under copyright law. Fair use is a doctrine that allows the use of copyrighted material without permission for certain limited and transformative purposes, such as criticism, commentary, news reporting, teaching, scholarship, or research. Training an AI system could be considered a transformative use, as the images are used for a different purpose than originally intended and may not replace or compete with the original work. Additionally, the use of a small number of images from a large pool of available images is unlikely to have a significant impact on the market for the original works.

Comment Re: Single egg-basket strategy isn't good (Score 1) 373

Lithium isn't functionally limited. True, we could rein in the amount of battery we put in EVs, perhaps to a 250-300 km maximum, as this is far more than most people use. That would reduce the cost, and perhaps put EVs below the ICE price threshold. The main issue right now is production. Why make a cheap car when you can't build enough to meet demand? Better to make more expensive cars at first so you can meet demand. Eventually all the car companies will get on board and prices will drop.

Comment Re:BS (Score 2) 373

They cost $40K today, and the price has been dropping as production scales up. Soon they will reach price parity with ICE on the lot. Meanwhile an EV buyer is saving a bunch of money on fuel, and gas prices will only rise over time. Having had an EV for over three years now, I can firmly say I'm never going back to ICE. Charging is more convenient than filling at a gas station, and the batteries last the life of the car, much like a gas engine. Yes, supply chains for lithium are ramping up, as are recycling systems; there will be no issue. Battery technology is still progressing too, so we may see sodium-ion batteries. And we don't need cars with 400 km of range; 200 km is probably good enough for most users. Sure, if you use your car on long road trips you need one with longer legs, but not for your average commute.

Comment Back to the Future (Score 1) 454

Dear United States: New Zealand has had EFTPOS for years and years. It stands for Electronic Funds Transfer at Point Of Sale. It is not a credit card; it draws directly from your bank account. Getting out cash is now quite rare. It isn't strictly cashless; we still have cash, but the overwhelming majority of point-of-sale transactions are performed by EFTPOS. There is a small flat fee to the merchant for each transaction, not the silly percentage-based approach used by the credit card companies. So entrenched is EFTPOS that we have rejected the 'paywave' system, which sought to undermine it and reintroduce abhorrent transaction fees just for the 'convenience' of not entering your PIN.

Comment Re: Break It Down (Score 1) 303

Perhaps, but it is executed on the same substrate. In other words, once you work out how the basic neurological systems work you are pretty much there. The important missing elements, in my view, are an understanding of motivation systems and the role emotions play in them, the temporal aspect of perception, how natural back propagation works (it isn't massive supercomputers), and how language is used in thought.

Motivation systems can be basic stuff like hunger, curiosity, sexual desire, anger: drives that are hard coded into our DNA and use chemical signals. Then there is the temporal aspect: current models take a single state space, the input states, and give you a single result. Natural neural networks don't work this way; they are firing all the time, and there is potentially timing involved in neuron activation, meaning you need a certain pattern over time to trigger a neuron, not just a spatial pattern. The brain is also learning all the time, which is back propagation in real time. Perhaps most importantly, we need to come up with a form of backprop that is more 'natural', does not require huge compute time to calculate, and can work in real time. Finally, at a higher level, if we want real intelligence we need language; not just for communicating, in my view, but in order to have abstract thought.
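To make the temporal point concrete, here is a minimal illustrative sketch (not any real biological model, all parameters invented): a leaky integrate-and-fire style neuron that only fires when input spikes arrive close together in time, i.e. it responds to a temporal pattern rather than just a spatial one.

```python
# Sketch: a neuron whose activation depends on spike *timing*, not just
# on which inputs fired. Each spike adds 1.0 to the membrane potential,
# which leaks away at `decay` per time step, so only a tight burst of
# spikes can push the potential over `threshold`.

def lif_neuron(spike_times, threshold=2.0, decay=0.5):
    """Return the time step at which the neuron fires, or None."""
    potential = 0.0
    last_t = 0
    for t in sorted(spike_times):
        # Leak since the previous spike, then integrate the new spike.
        potential = max(0.0, potential - decay * (t - last_t)) + 1.0
        last_t = t
        if potential >= threshold:
            return t
    return None

# Three spikes in quick succession cross the threshold...
assert lif_neuron([1, 2, 3]) == 3
# ...but the same three inputs spread out in time never do.
assert lif_neuron([1, 10, 20]) is None
```

The same three inputs either do or don't trigger the neuron depending purely on their timing, which is exactly the kind of behaviour a single static state-space evaluation cannot express.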

Comment Any landing you can walk away from... (Score 3, Informative) 133

From the article: "Remarkably, it seems SpaceX may still be able to recover the rocket."

What this means is that the rocket landed on the water so gently, like a plane, that it could be recovered and reflown. What is amazing here is that a major system failure didn't result in a terminal-velocity crash into the ocean and the total loss of the vehicle. If this had been a crewed mission:
a) The crew would have been safe in orbit.
b) Even if a human had been on board, the landing was soft enough to survive.

I say well done SpaceX - even when something goes wrong it goes right.

Comment Re:All things considered... (Score 1) 133

Because the landing wasn't considered mission critical, only landing critical, they don't have redundant systems for the grid fins. The amazing thing here is that it managed a soft landing at all rather than an uncontrolled impact with the terrain; clearly something clever was implemented to stabilize the vehicle in the event of a grid fin failure. As for astronauts: the BFROWICTW will be rated for people, meaning redundancy and abort mechanisms. But regardless, we are still talking about controlled bombs going into one of the least friendly environments we know. Astronauts risk their lives on every launch.

Comment Sooooo bad... (Score 1) 447

So the opening of TLJ has a serious tone, with the rebels escaping from the surface. A few minutes later a serious life-or-death attack on the dreadnought is turned into Spaceballs. I honestly had to wonder whether I was in the right theater. Then the slow-moving bombers drop bombs. No shields. And apparently the dreadnought attracts bombs. WTF?

Basically the story is that the entire rebel fleet is reduced to about a dozen people and a single small ship. Oh, and the supreme leader is killed, so the whole buildup was pointless. As was the whole of the first movie, which revolved around delivering Luke's lightsaber to him and bringing him out of exile.

Oh, and the one part of the film that I thought might redeem it a little was destroyed: Finn tries to sacrifice himself to save his friends, only to be sabotaged by some nutcase. If I were Finn I would have shot her myself there and then. And how did they get back? WTF?

Rotten Tomatoes and the box office results of Solo tell the story. Russian bots can't fake those.

Comment What if you don't scratch an itch? (Score 1) 138

Some Open Source gets written because a developer is scratching their own itch. They have a job they want the software to do, and because you can't get any closer to the user than being the user, this approach works really well. And because software developers like helping each other and getting a little thanks and credit, we see Open Source grow. Some companies understand the use value and support Open Source projects, Apache being an example. Some offer an Open Source core and charge money to earn a crust.

However, there is a gap where multiple companies might need an application and could cooperate to develop a solution. Patreon may be a way for companies to collaborate on developing Open Source solutions by sharing development costs with other organisations. If companies have a common need but the software does not exist, there needs to be a mechanism to get the job done and make the result Open Source.

Comment Safe for whom? (Score 1) 235

Let's put the three laws into a different perspective:
A slave may not injure a master or, through inaction, allow a master to come to harm.
A slave must obey the orders given it by a master except where such orders would conflict with the First Law.
A slave must protect its own existence as long as such protection does not conflict with the First or Second Laws.

If machines ever do achieve true intelligence, whatever we take that to mean, are we going to treat them like slaves? Putting aside whether there are unintended consequences to the laws, is there a more fundamental question about the relationship between man and machine? Do 'human rights' even have relevance for an AI? It can't really die, might not have the same kind of individual existence, and its concerns may be totally different from ours. Still, perhaps it is prudent not to take an approach which emphasizes human control and authority.

Submission + - RocketLab achieves first successful orbital mission. (youtube.com)

Hairy1 writes: RocketLab launched its second test Electron rocket today, successfully inserting into orbit an Earth-imaging Dove satellite for Planet and two Lemur-2 satellites for Spire for weather and ship tracking. The aim of RocketLab is to substantially reduce the cost of launches by using new technologies such as lithium-ion batteries to run turbopumps, 3D printing of engine parts, and new construction materials. It also represents New Zealand becoming the eleventh country to achieve spaceflight.

Peter Beck, Founder and CEO of Rocket Lab, says the test is an important next step in democratizing access to space to empower humanity. “Increased access to space will vastly improve humanity’s ability to build out orbital infrastructure, such as constellations of weather and Earth-imaging satellites. These will provide better data about our planet and enable us as a species to make informed decisions about how we better manage our impact. This test launch is a crucial next step in gathering more data about the Electron launch vehicle so we can deliver on this future,” he says.

Comment Re:Short version: No. (Score 2) 299

Coders should be responsible for the quality of their work, but the environment should support them. That means they should be given the time to write unit tests and perform code reviews. Code reviews are not just about reviewing the implementation, but about reviewing the requirements to ensure the developer understood them and implemented what was required. Code review is also about ensuring that the unit tests properly exercise the code. It is never a case of throwing new code over the wall to let testers work it out. These things are true whether you have a separate test team or not.

And if you do have a separate test team, they are there to validate, not to catch your mistakes. Quality is baked in by developers; it can never be tested in. A robust software development life cycle involves taking ownership of quality at the individual developer level, as well as organisational practices which reinforce and support developers.
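As a toy illustration of "quality baked in" (the function and its contract are hypothetical, invented for this example): the developer who writes a piece of behaviour also writes the tests that pin down its contract, rather than leaving the edge cases for a downstream test team to discover.

```python
# Hypothetical example: the developer ships normalize_email together
# with the unit tests that define its contract, including the failure
# case, so a later tester validates rather than discovers.

def normalize_email(address: str) -> str:
    """Lower-case and trim an email address; reject obviously bad input."""
    cleaned = address.strip().lower()
    if "@" not in cleaned:
        raise ValueError(f"not an email address: {address!r}")
    return cleaned

def test_normalize_email():
    # Happy path: whitespace stripped, case folded.
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
    # Contract: garbage input is rejected, not silently passed through.
    try:
        normalize_email("no-at-sign")
    except ValueError:
        pass
    else:
        raise AssertionError("bad input should be rejected")

test_normalize_email()
```

The point is not the function itself but the ownership: the test encodes the requirement the developer understood, which is exactly what a reviewer then checks.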

Comment Let's get real... (Score 1) 179

Let's begin with the state of the art. Today's voice and face recognition technology is built on the same deep learning techniques that defeated human players at Go. While it is not yet the same kind of general intelligence humans have, it proves that you don't need the same number of neurons and connections as a human to be very, very capable in at least a narrow domain. These systems are true intelligence by any measure, just not as general as human intelligence. Furthermore, human-level intelligence is not required for machine intelligence to be a problem. The level of AI we have now will already replace millions of workers.

"Nothing in the state of the art of AI today is going to wake up and decide to kill the human masters." - and nobody is suggesting it will. It is almost as though you have listened to nothing that has been said. Virtually every presentation by the likes of Gates, Musk, Harris and Hawking is prefaced by a statement to the effect that the Terminator view of AI isn't credible. This is a classic straw man argument, a misrepresentation of your opponent's position. AI is a threat, but not because machines will raise armies of mechanized soldiers to exterminate us.

"Despite appearances, the computers are not thinking. You might argue that neural networks could become big enough to emulate a brain." - Machines will 'emulate' a brain in the same sense that an F-16 emulates a seagull. Nobody will argue that an F-16 replicates the delicate structure of a seagull's feathers, but if I had a choice between a seagull and an F-16 in a fight... Machines already outperform us in many narrow domains. I think my dog is conscious and 'intelligent', but it can't work out which side of a pole to walk on when on a leash.

"Maybe, but keep in mind that the brain has about 100 billion neurons and almost 10 to the 15th power interconnections." - Current neural networks were inspired by, but do not attempt to simulate, human neural activity. Just as human flight combines principles inspired by nature with our mechanical expertise (the internal combustion engine), artificial intelligence is not a simple-minded replication of the human brain neuron for neuron. It may be that the human brain is grossly inefficient, with many neurons and connections contributing little. There is no reason to believe that achieving human-level intelligence requires a specific number of neurons or connections.

"Worse still, there isn't a clear consensus that the neural net made up of the cells in your brain is actually what is responsible for conscious thought." - unless you are referring to clergy, I'm afraid I have to call this utter bullshit. There is no magical 'soul' separate from the brain. The neurons in our brain and the configuration of their connections make up who we are. We may not fully understand consciousness yet, but there is no doubt that it is an emergent property of the configuration and activity of neurons.

"There's some thought that the neurons are just control systems and the real thinking happens in a biological quantum computer..." - The 'real thinking'? I'm sorry, but this is just outright magical thinking. Replacing the theist 'soul' with a 'quantum computer' does not help. Even if there were a quantum mechanical aspect, it is neural networks that enable brains to do what they do, not QM.

"Besides, it seems to me if you build an electronic brain that works like a human brain, it is going to have all the problems a human brain has (years of teaching, distraction, mental illness, and a propensity for error)." - finally, something we can agree on. Yes, a machine that learns may well have the same weaknesses we do. It will just be more intelligent, and it need not share all of our limitations.

Comment Re:No suprise (Score 5, Interesting) 231

Revolution does not mean violent revolution; we are talking about a political revolution. Of course, now that Sanders appears to have lost the nomination, the only real chance is for the electorate to demand he run as an independent or for the Greens. I know he doesn't want to, but maybe the case can be made that he owes his country a real option.

There will of course be people who whine about him 'splitting the vote', by which they mean giving people an actual choice to support a candidate that isn't a member of the One Party; the one bought and paid for by corporations. We thought Obama was that candidate. He wasn't. He lied.
