
Submission + - Brian Krebs is back online, with Google Cloud Hosting (krebsonsecurity.com)

Gumbercules!! writes: After the massive 600 Gbps DDoS on http://krebsonsecurity.com/ last week forced Akamai to withdraw the pro bono DDoS protection it had been providing the site, krebsonsecurity.com is now back online, hosted by Google.

After Brian Krebs broke a story on vDOS (https://developers.slashdot.org/story/16/09/08/2050238/israeli-ddos-provider-vdos-earned-600000-in-two-years), which led to the arrest of the service's two founders, his site was hit with a record-breaking DDoS. It will certainly be an interesting test of Google's ability to provide DDoS protection to clients.

The Internet

Who Is Getting Left Behind In the Internet Revolution? (sciencemag.org) 112

Reader sciencehabit writes: The internet is often hailed as a liberating technology. No matter who you are or what kind of country you live in, your voice can be amplified online and heard around the world. But that assumes that people can get on the internet in the first place. Research has shown that poverty and remoteness can prevent people from getting online, but a new study out today also shows that just belonging to a politically marginalized group can translate to poorer access. The study, published online today in Science, provides the first global map of the people being left behind by the internet revolution. Mapping the internet is hard. Although it is true that every computer with a connection has a real-world location, no one actually knows where they all are. Rather than being organized top-down, the world's computers are connected to each other by a bushy, redundant network of servers. Each country builds and maintains its own infrastructure for connecting citizens to the wider internet. The decision to expand and maintain the infrastructure in one region and not another is up to those in power. And therein lies the problem: Ethnic and religious minorities who are excluded from their country's political process may also be systematically excluded from the global internet.

Submission + - White House Names First Federal CISO

wiredmikey writes: The White House today announced that Brigadier General (retired) Gregory J. Touhill has been named the first Federal Chief Information Security Officer (CISO). Back in February, President Barack Obama unveiled a cybersecurity "national action plan" (CNAP) which called for an overhaul of aging government networks and a high-level commission to boost security awareness. As part of the plan, the White House said it would hire a federal CISO to direct cybersecurity across the federal government. General Touhill is currently the Deputy Assistant Secretary for Cybersecurity and Communications in the Office of Cybersecurity and Communications (CS&C) at the Department of Homeland Security (DHS).

The key hire comes at a time when the government needs cybersecurity talent more than ever. Earlier this week, a report published by a U.S. House of Representatives committee said the data breaches disclosed by the Office of Personnel Management (OPM) last year were the result of culture and leadership failures, and should not be blamed on technology.

Submission + - Hot Debate Raging on The Proposed Super Particle Collider in China (scmp.com)

hackingbear writes: Chinese high-energy physicists proposed four years ago to build a particle collider four times the size of the Large Hadron Collider in Europe. On Sunday, Dr Yang Chen-ning, co-winner of the Nobel Prize in physics in 1957 and now living on campus at Tsinghua University in Beijing, released an article on WeChat opposing the construction of the collider. He said the project would become an investment “black hole” with little scientific value or benefit to society, sucking resources away from other research sectors such as life sciences and quantum physics. Yang’s article hit nearly all social media platforms and internet news portals, drawing tens of thousands of positive comments over the last couple of days. The first stage of the project was estimated to cost 40 billion yuan (US$6 billion) by 2030, and the total cost would exceed 140 billion yuan (US$21 billion) when construction is completed in 2050, making it the most expensive research facility ever built in China. Yang’s main argument was that China would not succeed where the United States had failed: a similar project had been proposed in the US but was eventually cancelled in 1993 after construction costs far exceeded the initial budget. Yang said existing facilities, including the Large Hadron Collider, have contributed little to human knowledge and are irrelevant to most people’s daily lives. But Dr Wang Yifang, lead scientist of the project with the Chinese Academy of Sciences’ Institute of High Energy Physics, argued that research in high-energy physics led to the World Wide Web, mobile phone touch screens and magnetic resonance imaging in hospitals, among other technological breakthroughs.

Submission + - Did China suffer the first space launch failure of 2016? (gbtimes.com) 1

schwit1 writes: A scheduled Chinese launch has apparently ended in failure, though exactly what happened remains presently unknown.

China was expected to launch its Gaofen-10 Earth observation satellite from Taiyuan early this morning, following the issuance of an airspace exclusion zone days in advance. However, it seems the launch did not go to plan. Gaofen-10, nominally part of the ‘CHEOS’ Earth observation system for civilian purposes, was due to be launched on a Long March 4C rocket between 18:46 and 19:11 UTC on Wednesday (02:46-03:11 Thursday, Beijing time). China usually releases information about launches around an hour afterwards, once payloads are successfully heading towards their target orbits. Spectators and insiders often share details and photos of the launch on social media much earlier.

However, many hours after the launch window passed there was still silence, with the launch timing and location of the Taiyuan Satellite Launch Centre apparently limiting opportunities for outside viewers.

The launch, however, was not scrubbed: first-stage debris was found as expected along the flight path, suggesting that some failure occurred with the upper stage.

Like today’s Falcon 9 failure, this Chinese failure could have a ripple effect on the country's ambitious plans for this fall, including the launch of its next space station followed by a 30-day manned mission.

Submission + - Clinton E-Mail server was hacked, so says the FBI (politico.com)

An anonymous reader writes: Politico has an article today stating that an unknown individual, using the encrypted privacy tool Tor to hide their tracks, accessed an email account on a Clinton family server, the FBI revealed Friday.

The FBI disclosed the event in its newly released report on the former secretary of state’s handling of classified information. According to the bureau’s review of server logs, someone accessed an email account on Jan. 5, 2013, using three IP addresses known to serve as Tor “exit nodes.”

Submission + - Palo Alto, CA to ban software coding firms? (nytimes.com)

davemc writes: Palo Alto to ban software coding firms? It seems there is a little-known (and definitely not enforced) zoning regulation covering R&D, including software coding. I guess the residents of Palo Alto already have their slice of the pie, so they want to ban pies altogether.

"... the mayor is looking to enforce, in some form, an all-but-forgotten zoning regulation that bans companies whose primary business is research and development, including software coding. (To repeat: The mayor is considering enforcing a ban on coding at ground zero of Silicon Valley.)"

Submission + - Russians Hacked Arizona Voter Registration Database -Official (time.com)

alir1272 writes: Russians were responsible for the recent breach of Arizona’s voter registration system, the FBI told state officials in June.

Matt Roberts, a spokesman for Arizona Secretary of State Michele Reagan, said on Monday that FBI investigators did not say whether the hackers were working for the Russian government, the Washington Post reported. He said hackers gained access after stealing the username and password of an election official in Gila County, rather than by compromising the state or county system.

Comment Web-skewed (Score 1) 241

Anyone can put up a web page, and JavaScript and PHP have a large footprint there. (I guess Java dominates on the enterprise server side?) It's not hard to imagine there are lots of folks who have to deal with these languages as part of their larger duties but aren't really trained as programmers in any traditional sense. That could fuel a bunch of Stack Overflow traffic for sure...

Whichever ranking you look at will be skewed by the methodology. It feels like web-oriented languages are overemphasized in this cut.

Of course, my own worldview is skewed, too. I deal more with low-level hardware, OS interactions, etc. You won't find a lick of JavaScript or PHP anywhere near any of the stuff I work on daily. Lots of C and C++, some Go and Python.

Comment Re:It does almost nothing very very fast (Score 1) 205

Ah, OK, so it is more or less the latest version of ASaP/ASaP2. I just made a post up-thread about my memory of ASaP. It looked interesting, but as you point out, it has some real practical issues.

At the time we spoke with them, it sounded like whenever you loaded an algorithm chain, you had to map it to the specific chip you were going to run it on, to account for bad cores, different core speeds, etc. Each core has a local oscillator. Whee...

Comment Re:I guess this is great (Score 1) 205

I'm familiar with Dr. Baas' older work (ASaP and ASaP2). He presented his work to a team of processor architects I was a part of several years ago.

At least at that time (which, as I said, was several years ago), one class of algorithms they were looking at was signal processing chains, where the processing steps could be described as a directed graph of processing steps. The ASaP compiler would then decompose the computational kernels so that the compute / storage / bandwidth requirements were roughly equal in each subdivision, and then allocate nodes in the resulting, reduced graphs to processors in the array.

(By roughly equal, I mean that each core would hit its bottleneck at roughly the same time as the others whenever possible, whether it be compute or bandwidth. For storage, you were limited to the tiny memory on each processor, unless you grabbed a neighbor and used it solely for its memory.)

The actual array had a straightforward Manhattan routing scheme, where each node could talk to its neighbors, or bypass a neighbor and reach two nodes away (IIRC), with a small latency penalty. Communication was scoreboarded, so each processor ran when it had data and room in its output buffer, and would locally stall if it couldn't input or output. The graph mapping scheme was pretty flexible, and it could account for heterogeneous core mixes. For example, you could have a few cores with "more expensive" operations only needed by a few stages of the algorithm. Or, interestingly, it could avoid bad cores, routing around them.

It was a GALS design (Globally Asynchronous, Locally Synchronous), meaning that each of the cores ran at a slightly different frequency. That alone makes the cores slightly heterogeneous. IIRC, the mapping algorithm could take that into account as well. In fact, as I recall, you pretty much needed to remap your algorithm to the specific chip you had in hand to ensure best operation.
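As a rough sketch of that kind of chip-specific, load-balanced mapping (all kernel names, costs, and core speeds below are invented for illustration; this is a greedy toy, not the actual ASaP compiler):

```python
# Hypothetical sketch: kernels with rough compute costs are greedily
# assigned to the usable core that would finish them soonest, so every
# core hits its bottleneck at about the same time.

def map_kernels(kernels, core_speeds):
    """kernels: {name: cost}. core_speeds: relative per-core speed
    (GALS means each core runs at a slightly different frequency);
    a speed of 0 models a bad core to route around."""
    load = [0.0] * len(core_speeds)
    placement = {}
    # Place the most expensive kernels first (greedy LPT heuristic).
    for name, cost in sorted(kernels.items(), key=lambda kv: -kv[1]):
        # Pick the usable core that would finish this kernel soonest.
        best = min(
            (i for i, s in enumerate(core_speeds) if s > 0),
            key=lambda i: load[i] + cost / core_speeds[i],
        )
        load[best] += cost / core_speeds[best]
        placement[name] = best
    return placement, load

kernels = {"fft": 8.0, "viterbi": 6.0, "fir": 4.0, "framer": 2.0}
placement, load = map_kernels(kernels, [1.0, 1.1, 0.0, 0.9])  # core 2 is "bad"
```

Here the bad core (index 2) never receives work, and retargeting the same chain to a different chip just means rerunning the mapping with that chip's measured per-core speeds.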

The examples we saw included stuff familiar to the business I was in—DSP—and included stuff like WiFi router stacks, various kinds of modem processing pipelines, and I believe some video processing pipelines. The processors themselves had very little memory, and in fact some algorithms would borrow a neighboring core just for its RAM, if it needed it for intermediate results or lookup tables. I think FFT was one example, where the sine tables ended up stored in the neighbor.

That mapping technology reminds me quite a lot of synthesis technologies for FPGAs, or maybe the mapping technologies they use to compile a large design for simulation on a box like Cadence's Palladium. The big difference is granularity. Instead of lookup-table (LUT) cells, and gate-level mapping, you're operating at the level of a simple loop kernel.

Lots of interesting workloads could run on such a device, particularly if they have heterogeneous compute stages. Large matrix computations aren't as interesting: they need to touch a lot of data, and they do the same basic operations across all the elements. So it doesn't serve the lower levels of the machine learning/machine vision stacks well. But the middle layer, which focuses on decision-guided computation, may benefit from large numbers of nimble cores that can dynamically load-balance a little better across the whole net.

I haven't read the KiloCore paper yet, but I suspect it draws on the ASaP/ASaP2 legacy. The blurb certainly reminds me of that work.

And what's funny, is about 2 days before they announced KiloCore, I was just describing Dr. Baas' work to someone else. I shouldn't have been surprised he was working on something interesting.

Comment Re:Yes. (Score 1) 143

Came here to say the same thing. The nice thing about a compact proof is that it may generalize to other situations or offer greater insights. This is certainly not a compact proof. But, to say it's not a proof is ludicrous. It's a very explicit and detailed proof.

It's the difference between adding up the numbers 1 through 100 sequentially (perhaps even by counting on your fingers) and using Gauss' insight to take a shortcut. The computer didn't take any insight-yielding shortcuts, but it still got the answer.

________

(And yes, Gauss' story is probably apocryphal; but still the difference between the approaches is what I'm getting at.)

(I say "insight-yielding shortcut" to distinguish it from the many heuristics that modern SAT solvers use, including the one used here.)
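The contrast is easy to make concrete (a trivial sketch of the analogy, unrelated to the SAT solver itself):

```python
# The explicit, step-by-step route: add 1 through 100 one at a time.
brute = 0
for k in range(1, 101):
    brute += k

# Gauss' shortcut: pair 1+100, 2+99, ... -> 50 pairs summing to 101,
# i.e. n * (n + 1) / 2 for n = 100.
n = 100
shortcut = n * (n + 1) // 2

assert brute == shortcut == 5050  # same answer, very different routes
```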

Comment Re:Well no kidding (Score 1) 94

I was about to mention NYC. I wouldn't say it works fine, but it does work better than most places. The subway stations also have a fairly restricted number of people at a time, though, and that is where it works best for me. Also, not every business and local is trying to be on it 24/7. I wonder what the peak usage is? I guarantee it is much lower than other places.

Comment Re:"free" never fails to disapoint (Score 1) 94

His point is that the government will never have enough oversight of itself -- it shouldn't have had this bad a failure for so long -- to fix these problems. Saying it would all be better if the government just did something it has historically never been able to do well is a fool's dream. Lacking a profit motive, governments have very little natural force correcting them, especially when it comes to bureaucrats paid according to union standards and protected by them. They really don't care if anything works out, as long as they can show they put forth even the smallest effort.

Your asking for more regulation after giving tax money to a corporation reminds me of the Ronald Reagan quote: "If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidize it."

The problem with government-run anything (from democratic socialism to Marxism) is that it never works out like it's planned, and those who support it just keep hitting their heads against the wall with the same mantra: "But that wasn't what was supposed to happen; that wasn't real [insert personal economic philosophy]." They never seem to learn that there will always be a huge gulf between theory and practice when it comes to political economy. Capitalism self-corrects around it and uses our worst side -- greed -- to make the world better.

Submission + - Clean out Distros

wnfJv8eC writes: There needs to be a user site to survey distro packages. I just went to remove xfsprogs, a holdover from SGI from the early, very early 2000s. Why is GNOME dependent on this package? Remove it, and you remove GNOME? Really? The dependency tree is all screwed up. Never mind XFS, which by now I can't imagine anyone using -- why aren't such add-ons plugins? Why are they still supported? Who uses them? Linux once dropped support for Minix when no one used it anymore.
It's time for a house cleaning. That starts with a good vote on what is and isn't being used. Then dependency trees can be corrected, not just grandfathered in.
There are many examples of stupid dependencies. For example, Rhythmbox requires gvfs-afc, which rpm -qi describes as "This package provides support for reading files on mobile devices including phones and music players to applications using gvfs."
So if I never plug my phone or other mobile device into my computer to play music, must I still have this thing loaded and running? But remove gvfs-afc, and you pull Rhythmbox with it. The dependency is all wrong.
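That kind of cascade is just reverse-dependency closure: removing a package drags out everything that transitively requires it. A toy sketch (the package names come from the post, but the edges here are invented for illustration, not taken from any real distro):

```python
# Toy dependency graph: each package maps to the set of packages it
# requires. Edges are hypothetical, chosen to mirror the complaint.
deps = {
    "rhythmbox": {"gvfs-afc"},
    "gvfs-afc": set(),
    "gnome-shell": {"xfsprogs"},  # the hypothetical GNOME -> xfsprogs edge
    "xfsprogs": set(),
}

def removal_set(pkg):
    """Everything that must go if pkg is removed: pkg itself plus all
    packages that transitively require it (reverse-dependency closure)."""
    doomed = {pkg}
    changed = True
    while changed:
        changed = False
        for p, reqs in deps.items():
            if p not in doomed and reqs & doomed:
                doomed.add(p)
                changed = True
    return doomed

doomed = removal_set("gvfs-afc")  # removing gvfs-afc drags Rhythmbox out too
```

Fixing the complaint means turning hard `Requires` edges like these into optional ones (plugins/weak dependencies), so the closure stops at the package actually being removed.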
